tags: #publish
links: [[Artificial Intelligence]], [[Software and Tech]]
created: 2022-09-02 Fri
---
# AI Horrorlog
An inadequately-maintained log of some **recent developments** in AI that are *challenging assumptions, disrupting existing industries, or fundamentally breaking foundations of society* with wide-ranging impact.
### 2023-02-14 Bing Chat prompt injection, and how chatbots are given an identity
https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/
Illustrating the fragility of trying to restrict large language models by just giving them setup instructions, the researcher was trivially able to get it to disclose its own instructions, and to get it to ignore parts of them.
This is also a nice intro to how these chatbots are actually implemented and given an identity. You just give it a description of its identity and behaviour, then segue into a dialog between it and a human, append your user's input, then the model just predicts the next thing it will say in a way that's consistent with the identity description. Effective, but seemingly just as "trickable" as a human!
### 2023-01-08 AI barristers
Fancy a contempt of court or perjury charge? DoNotPay can help you get one, via this handy earpiece to sneakily wear in court, so you can just repeat to the judge what its AI tells you to instead of paying a real lawyer: https://gizmodo.com/donotpay-speeding-ticket-chatgpt-1849960272
Perhaps it'll revolutionise justice? More likely, it'll upset a lot of legal folks who won't take kindly to having their dignity and serious professional standing challenged by a chatbot.
If this sort of thing succeeds though, I think we're heading for a world where most things are done by:
a) expensive bots talking to each other and exercising questionable levels of judgement;
b) expensive bots talking to menially-paid humans, who bear the brunt of dealing with the insanity and injecting real-world knowledge into the situation.
Neither sounds like an improvement really?
What this stuff won't achieve is removal of bureaucracy, but rather partial automation of it - so we can have more of it! Because that's where the profit incentives will naturally drive it. Likely with the same number of humans on the margin of it, but performing increasingly unpleasant tasks as their set of colleagues expands to include a lot of machines.
### 2023-01-04 Adobe Lightroom may be merrily using people's images to create AI training datasets without consent
Default opt-in, at least for some users:
https://toot.cafe/@baldur/109630505660962387
https://news.ycombinator.com/item?id=34257224
Hope you don't mind their AI creating replicas of your style or private client information...
Good luck getting it taken out once it's in there
### 2023-01-04 AI audiobook narration
Let's get rid of the artform of performance narration, and replace the actor/performer with a low-cost AI equivalent which doesn't actually understand the concept of performance! That sounds like a profitable wat to destroy culture. Thanks, Apple, I'm sure Amazon and Google will join you shortly:
https://www.theguardian.com/technology/2023/jan/04/apple-artificial-intelligence-ai-audiobooks
### 2022-12-18 Artists fight back using Disney
AI companies have decided they can freely exploit others' work without compensation or attribution.
Uh, but what if that work belongs to a large powerful corporation, instead of artists who don't have the resources to force the law to catch up with this? Same answer still? Let's see:
Fighting AI copyright infringement by making derivative works of Disney and Marvel etc:
https://vmst.io/@selzero/109512557990367884
### 2022-12-18 Riffusion - music interpolation and regeneration via spectrograms and Stable Diffusion
https://www.riffusion.com/
Is this horror? Not quite yet, but heading in a similar direction to Dramatron, I guess
### 2022-12-17 Dramatron
AI scriptwriting "co-writer"
https://deepmind.github.io/dramatron/details.html
*No.* Just no.
### 2022-12-12 Lensa and AI bias
A fine demonstration of training dataset bias: Lensa app to create avatars gives rather different treatment for female-looking input images vs male-looking ones:
https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
### 2022-12-10 Watermarking AI-generated content, and a new arms race
https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits/
How do you identify AI generated content? E.g. if you would like to prevent plagiarism and passing-off-as-human, detect fake or generated content, etc.
There's some nice-sounding statistical watermarking approaches here but firstly, that immediately becomes an arms race in the above contexts, and secondly it relies on the AI owner turning on watermarking, so it's useless as soon as you opensource the system.
### 2022-12 ChatGPT and reinforcement learning AI safety mechanisms
OpenAI grants public access to their chatbot, which is capable of conversations that accumulate context.
Plenty has been written elsewhere about what industries this is going to disrupt, so let's focus on even more serious dangers: there's a live proof here that we're unable to control even these narrow-focus unintelligent systems.
ChatGPT shows the rapid advancement and complexity of large language models, but more importantly the difficulty of "safety" approaches that rely on humans training the AI after it does bad stuff, to not do similar bad stuff again. Refining such a safety mechanism appears to have been OpenAI's goal from this trial, with restrictions visibly tightening as people played with it, as shown by [this thread which has lots of interesting examples both of its capabilities, and of dodging restrictions](https://news.ycombinator.com/item?id=33847479).
Lots more [from Scott Alexander on how scary this kind of "safety" is](https://astralcodexten.substack.com/p/perhaps-it-is-a-bad-thing-that-the).
This "safety" (yep, deserves scare quotes) has rather obvious flaws:
- It has to do some bad stuff before you teach it not to, so the approach is utterly unsuitable for a context that involves real-world risk.
- Relies on a big team of humans doing menial work to assess the results and do the reinforcement training.
- Doesn't help in the slightest with the usual problems of training dataset bias.
- No substantial ability to adapt to qualitatively new dangerous situations. The real world tends to provide many of these, and the examples in the articles above are pretty good evidence that determined creative people can come up with a great many ways around the training.
- Current methods don't seem to achieve internal consistency of responses, even. You can get the AI to tell you it shouldn't do something and explain why, then you can immediately get it to do the thing. This... does not inspire confidence in this approach, or that there is any "real" understanding happening.
Google appears to have stuff at a similar level but is being rather more cautious about opening it to the public - not because they're worried about safety really but because they're worried about *reputational risk*!
### 2022-11 Police murder robots
*Ok so it's not really AI **yet** but an alarming enabling shift in attitudes*
Oh yeah let's make robots with bombs and give the police permission to use them to blow up troublesome people, with no particular need to try less violent approaches first. Maybe we'll automate some of that at some point. Sounds fine
https://www.sfgate.com/politics/article/San-Francisco-approves-lethal-robots-17619556.php
(It seems they went back on some of this after the entirely predictable outcry - for now)
### 2022-11-03 Now we can easily make generative models "fine-tuned" on a specific artist's work
https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/
Only need a few images to train it. Cheap and easy to do.
And it's likely that's not infringement, and the artist has no way to prevent it.
The results are currently good enough to be recognisable as the artist's style by the general public, but bad enough to be an uncomfortable distortion and violation of integrity for the artist and anyone with taste.
### 2022-10-27 AI Data Laundering
https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/
Large companies are routinely using non-commercial-licensed content to train their commercial AIs, by backdooring copyright legislation by having academic researchers create the dataset and train for them.
Courts in various places have ruled that some users are fair use because they are "transformative", but it's not clear yet whether the same applies to AI training. If send qualitatively quite different from, say, [digitising book content to provide search](https://en.m.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.), because inn this case the content is used to *produce* new look-alike replica content.
### 2022-10-28 South Korean illustrator's work replicated within days of his death
https://restofworld.org/2022/ai-backlash-anime-artists/
Part of a general storm of controversy about generative AI in South Korea and Japan.
### 2022-10-24 Monetising language models: machine-generated books full of nonsense
https://lcamtuf.substack.com/p/fake-books
Number #3 on Amazon for books on NFTs!
### 2022-10-07 Even more text-to-video on the way
https://imagen.research.google/video/
Not too be outdone by Meta's announcement, Google AI announces their own effort: a bit better, a lot sharper.
There's another parallel effort focusing on length and coherence.
Advice. This would be a poor time to attempt to start a career in any creative field producing something that can be represented by a computer and where the value is solely in the surface level output, rather than the process of creation or its meaning.
### 2022-09-30 AI text-to-video generation is on the way
Meta has [a system that generates short videos from text prompts](https://www.theverge.com/2022/9/29/23378210/meta-text-to-video-ai-generation-make-a-video-model-dall-e). Not amazingly good so far, but if there's one thing not to expect in this field, it's slow progress.
### 2022-09-03
StableDiffusion [publically released its model, code, training weights](https://stability.ai/blog/stable-diffusion-public-release) in late August, following a release to researchers a few days earlier.
This model can generate realistic, detailed images from a text prompt, or *can be seeded with a cruder image and realise it*, even a crude crayon sketch.
The improved aesthetics over Dall-E, and especially the image-to-image "seeding", already disapproves several of the reasons why [this person writing only a few days earlier wasn't concerned about the impact on illustrator jobs](https://emmanuel6.medium.com/why-dall-e-will-not-steal-my-job-6a1e2943cb82).
The really significant thing here is it's a generally-available public release, you can run it locally, you don't need particularly special hardware just a decent GPU, and there's also a cheap hosted version. Most previous AI image generation tools have been closed or limited to researchers or limited.
[Here is some of what it can do](https://andys.page/posts/how-to-draw/)
[Here's a non-hyperbolic take on some implications](https://thealgorithmicbridge.substack.com/p/stable-diffusion-is-the-most-important)
[Here's some actual real info](https://www.paepper.com/blog/posts/how-and-why-stable-diffusion-works-for-text-to-image-generation/)
[Plenty more scare threads here on why that's going to change a few things](https://news.ycombinator.com/item?id=32555028). Suddenly, generation of arbitrary artificial images is available to a large public audience.
The tools are released with an "ethical use" licence. Nice try, but that's not going to meaningfully constrain anyone malicious...
### 2022-09-02
MidJourney generated images [win first prize](https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed) in digital arts category of a fine arts competition, pushing out artists who actually created their vision themselves rather than just wrote about it.
Creator just fiddled with image generation prompts then processed in Photoshop and used another AI tool, Gigapixel, to upscale by inventing more detail.
### 2022-08
Callcentre tools that can change the accent of callcentre operatives as heard by customers.
You know, to make those foreign-sounding people sound American or Australian to support a fake brand image or to stop the customers being racist to them.
### 2021-08-17 Boston dynamics robots do parkour and backflips
https://youtu.be/tF4DML7FIWk
Excited to see these things joining your local militarised police force in a few years? Should be fun.
### Unknown
[Gigapixel](https://www.topazlabs.com/gigapixel-ai), an AI tool that invents details and textures to enhance resolution of images or video, based on a large corpus of training images.
There are some fantastically useful applications of this in print, creative work, restoring or remastering poor or legacy content, etc.
But: So you thought seeing a nice high-res image or video gave some confidence that it wasn't an AI-generated fake? Think again.