So, a little announcement. I’ve started a new Substack newsletter/blog called My AI Obsession, which is about, well, you know. I’ll be posting there several times a week (while continuing to post here, of course), and I encourage you all to subscribe or, at the very least, check it out. Subscriptions are free; later, there will be paid options. My first post is here. (If you get any weird glitches trying to access it, please power through them; they’ll get fixed.)
As I explain in my “about” post there, the blog will attempt to offer a “unique and entertaining take on AI.”
Artificial Intelligence has rapidly progressed from something in the background of our lives to something we interact with directly—for some of us, on a daily basis. My new blog will cover the culture and technology of AI, its social and political implications, and the potential dangers it could bring.
I’ll use this blog to talk about everything from chatbot love to the possible end of human civilization. I’ll highlight serious issues about AI ethics and share silly experiments with generative AI art. I’ll try to make sense of the growing impact of AI on all of our lives, including my own.
I’ve been covering the culture of technology and the internet for decades. I first wrote about the internet back in the mid-nineties, when most people didn’t even have access to it, and I think AI will be at least as important to our lives, for better or worse, as the internet has turned out to be.
Anyway, I urge you to join me as I explore the uses and implications of this technology, which has started to come into its own after decades of small triumphs and frustrating failures.
There will be some silly stuff mixed in with the more serious takes. If you like this blog, I think there’s a good chance you’ll like the new one. See you there!
I have a bit of an issue with AI art because these systems use creators’ works without compensating the creators.
@ hippielady
That’s a really hot topic; and one I’d be interested to see David’s take on.
A recent US court ruling has determined AI cannot hold copyright as it’s not a legal person (I follow the AI arguments because they can help with our animal personhood stuff).
But as for AI ripping off other artists, that’s not straightforward. The test for when appropriation art gets its own IP is where the creator has exercised skill and/or judgment in the derivative work.
So a photocopy cannot attract a fresh copyright, but if an artist used a pencil to replicate a photograph to the extent it was a perfect copy, that would. Also, if you either use another work as an inspiration for something novel, or even use the work itself but adapt it in some way, that passes the test as you are contributing your own thought processes so as to create something truly transformative.
But an AI seems much more akin to the photocopier example; it’s just a machine and no thought is involved. So it’s difficult to see how the AI itself could pass the relevant test; although arguably the creators of the AI could. Or are they just akin to the inventor of the photocopier?
We’re awaiting a decision in a human appropriation case so that might give some answers of general applicability that can be applied to AI.
@Alan Robertshaw
To me and a lot of artists I know, it is less a matter of IP law or whether a computer can have skill, and more an issue of these AIs being trained on thousands of images taken without credit, compensation or permission (and many artists will use whatever means they have to loudly shout that we do not give permission, though that changes nothing), and then being used to get an end product without anyone needing to pay for the work. This isn’t an actual new intelligence being creative on its own; it’s a set of algorithms that for a while kept replicating echoes of Getty Images watermarks and artist signatures, because it is just mashing together data from artwork taken from others.
A similar issue exists in the translation industry, where companies have taken machine translation and either hired translators to “edit” it for lower pay (even if the machine translation is so bad it requires starting from scratch) or just released it as is, because it means they don’t have to pay anyone at all for the work.
Basically, AI art currently has a lot of ethical issues and is not great for artists, even if it is eventually decided in a court that AI makers own the products they made by feeding other people’s work into their machine. If that makes sense.
Seems to me the AI doesn’t get a copyright any more than a photocopier does; but neither is it infringing a copyright, since the AI’s output is not a simple copy of what it trained on.
I also don’t see an ethical issue in an AI training on the output of famous painters. First of all, so do new human painters, and we don’t consider their works to be infringing on the artists who contributed to their “training data”; and second, most of the authors of the training works are centuries dead anyway.
In the end, learning from those who came before us is a positive-sum thing, and I don’t think we want to be obstructing that with legal red tape and/or tollbooths.
There’s a pretty significant difference between an actual human artist learning from the techniques of other artists in order to create their own original work and an algorithm scraping the work of actual human artists and then generating a knockoff of those artists’ work without their consent. One is an actual learning process, the other is automated plagiarism with enough randomisation to circumvent existing copyright laws.
Beyond the pure ethics of it, the other, far more serious issue is that AI content generation – and it’s important to use accurate terms like “content generation” rather than muddying the waters with the fundamentally misleading term “AI art” – is going to end up destroying the livelihoods of actual human artists. There are already cases cropping up of companies using AI to replace artists, and without significant legal restrictions we’re going to very quickly see art as a profession largely wiped out by automation. Of particular relevance to the issues this site deals with, it’s going to have an especially devastating impact on artists from traditionally marginalised backgrounds, and on anyone who doesn’t produce the kind of generic white-guy material the industry might consider marketable enough to pay a human being for, instead of just getting an algorithm to crank out massive quantities of free content based on art stolen from the same creators it has now put out of a job.
There have been job losses due to automation before, and the solution has never yet been that proposed by Luddites.
Funny how “Luddite” has become for people concerned about AI what “woke” is for people concerned about social justice issues. Fuck anyone who uses that term unironically; they’re either Libertarian techbros or their enablers.
@Reaktor – I think it depends what you mean by “Luddites.” I have some conflicting thoughts both ways.
For instance, I get a bit impatient with my father when he tries to blame the internet for things it’s not responsible for, like me getting distracted. I have ADHD; my superpower is I can waste time on anything.
Then again, I’m wary of things like two-factor authentication that requires a second device, and the justification that it’s better security. Better security for whom? Corporations more than individual users, I’m sure. I’d like to be able to log in to my bank or email in an emergency if I need to use a laptop that’s not mine…and it’s more likely that in such a situation I’d be missing my phone. I worry sometimes about that happening. Besides, depending on individual devices seems to run counter to the whole “cloud-based” idea.
So, my own rant aside, I think there’s room for nuance: that doing away with particular tech isn’t necessarily the solution, but the systems or mindsets around the use of that tech might be challenged because they’re likely upholding unfair power structures.
My new job is travelling back in time to assassinate future leaders of the human resistance. So I think that should be pretty safe from automation.
Perhaps one of the most insidious aspects of AI content creation is that it takes advantage of people trying to assist other humans.
The software that trains the AI can only do so with images where people have provided alt-text. The more people try to assist those with visual disabilities, the more useful the image becomes for AI scraping.
If I just put up a photo of my dog in a hat, with no accompanying text, the image is useless for AI purposes. But if I put a detailed description “Belgian shepherd dog wearing paper crown obtained from Christmas cracker” etc then that allows the AI to know what a Belgian Shepherd looks like, what a Christmas cracker hat looks like etc.
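(A minimal sketch of what that means in practice; the ScrapedPost type and build_training_pairs function below are made up for illustration, not taken from any real pipeline. Caption-to-image training sets are generally assembled by pairing each image with whatever descriptive text travels with it, and alt-text is exactly that; images with no description tend to get filtered out.)

```python
# Hypothetical sketch: turning scraped posts into (image, caption) training pairs.
# Only images that came with alt-text survive the filter, which is why
# accessibility descriptions end up doubling as training labels.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScrapedPost:
    image_url: str
    alt_text: Optional[str]  # the description written for screen-reader users, if any

def build_training_pairs(posts: list[ScrapedPost]) -> list[tuple[str, str]]:
    """Keep only posts whose images have a usable description."""
    pairs = []
    for post in posts:
        if post.alt_text and post.alt_text.strip():
            pairs.append((post.image_url, post.alt_text.strip()))
    return pairs

posts = [
    ScrapedPost("https://example.com/dog.jpg",
                "Belgian shepherd dog wearing a paper crown from a Christmas cracker"),
    ScrapedPost("https://example.com/dog2.jpg", None),  # no alt-text: useless as a labelled example
]
print(build_training_pairs(posts))
```

That’s the bind: the better the accessibility description, the better the training label.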
Some image hosting sites do allow you to opt out of having your images used to train AIs. Some, like DeviantArt, have that set as a default. But you then get into issues of enforceability. You would have a legal claim for breach of copyright; but in practical terms, what can you do about it? Although if some major artist uses AI and produces a highly valuable piece, then maybe we will get a test case.
Given the power AI will have in the future and its need for training data, I firmly believe that content creators should be compensated when their work is used to train AIs. I’m not sure how this could be done, but it should be done.
Also, I think we need to ensure AIs never get trained on AI-generated content. I cannot see how that would provide anything useful: the technology’s power is its ability to mimic human-generated content.
That said, how these systems work is interesting with regards to copyright and derivative work. For instance, the text generators are basically just scaled-up versions of the text predictors we see on our phones, but instead of one person’s text as training data, they use a huge corpus of text scraped from the internet. They must do something to analyse the prompt as well, but basically they generate words using a statistical score for what the next word would likely be if a human were writing the given text. The system has no intent or understanding of what it’s writing.
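To make the “scaled-up text predictor” point concrete, here is a toy illustration only: a bigram word-count model, not how any production system is actually built. Real large language models use neural networks trained on enormous corpora, but the loop is the same: score the candidate next words and pick a likely one, with no grasp of what the text means.

```python
# Toy next-word predictor: a bigram model built from a tiny "training corpus".
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt by repeatedly sampling a statistically likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # this word was never followed by anything in training
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts, k=1)[0])
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat on the mat and the cat slept"
```

Nothing in that loop knows what a cat or a mat is; it only knows which words tended to follow which in the training text.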
And that brings us to the issue of bias and harm. If you scrape a huge chunk of random text from the internet, there is a lot of hate, bias and bullshit in that sample. And the AI has no idea what is harmful unless it’s programmed to account for that. Given a specific prompt, it will generate whatever is statistically likely to match that prompt, happily producing vile text. The fact that it will generate this hateful content says a lot about the state of our writing and what is out there.
And this is the issue: who is accountable for this? Personally, I think the AI company should be; they should have standards to prevent harm. Not the OpenAI model: using ChatGPT, it feels more like an HR department’s idea of preventing a company scandal, which is not the same thing.
Like David, I have also been a little obsessed with this topic.
I’m not good with tech – can anyone who is tell me if I’m doing something wrong, or if there’s a reason the background is pink on my laptop and black on my phone, even when I put the phone on desktop view? Thanks. If not, just ignore this.
On topic – I hope there’s a way to do AI drawings that compensates the original artists. A lot of it is beautiful.
@ Surplus to Requirements
I’m sorry, I guess I can’t be so relaxed about artists who’ve worked hard for years or decades to develop their skills having their work stolen in order to feed a technology that’ll take away their ability to make a living so content publishers can maximise their profits a little bit more, or human creativity being discarded as nothing but an inefficiency to be automated away. I guess I’m just one of those luddites who hates progress and isn’t forward-thinking enough to acknowledge how much better things will be once creative expression is excised from mainstream culture in favour of the far more efficient output of software guided by marketing departments.
@epitome of incomprehensibility:
As I said, I only mean “Luddites” in the sense that is currently being applied by techbros and their supporters, like Surplus in this case, to anyone who dares to show even a bit of apprehension towards their shiny new toys (sorry, AI tools) and their potential to disrupt people’s livelihoods. The historical Luddites destroyed machinery, and while there are certainly a few people out there who want AI tech banned into oblivion, I think the majority of us just want to see it regulated properly, not gone.
PS: Despite the earlier sarcasm, I do think that AI image generation is a useful tool. But, again: only if its usage is regulated.
I am a complete technoklutz, but to those who know what they are talking about more than I do, which is all of you – have you come across a (freeware, I think) thing called “Glaze”? It’s a little tool you can use on your images before uploading them to public view; it doesn’t mess them up to the human eye but fucks them up for scraping purposes.
If it’s any good – which from what I’ve read, it seems to be? – it might be something that is worth using and passing around? It’s certainly getting talked about a lot online at the moment!
@Lisa, you probably have your phone in night mode (black background) and your computer in day mode (pink). Night mode is easier on the eyes. There might be a white square, about 1 cm by 1 cm, with a crescent moon in it at the bottom right-hand side of the screen as you look at it. Tap that and it turns the screen pink; tap again and it goes back to black.
About the AI content question: there’s been some upset in the SFF creatives community, because artists are getting their work ripped off by AI, and other people are reposting the AI generated art as their own. It’s not right and it’s not fair.
About Luddites: the Luddites were a working-class protest movement. Destroying machinery was one of their tactics; they also tried to burn down mills and went on strike. The Luddites were viciously abused by factory owners, the state and journalists. They were subject to imprisonment, fines and transportation when they weren’t being murdered by the army.
It wasn’t specifically about the machines themselves, but about the way employers were treating employees, the cut in wages, the poor working conditions, the artificially high food prices, and the damage early capitalism and the factory system were doing to families, who could no longer afford to eat or maintain their skilled trades, and to communities that were being destroyed while having their homes and communal land taken away. Machinery was a symbol of all that, not the thing in itself.
Personally, I have no problem with the Luddites protesting. Using ‘Luddite’ as an insult, to suggest that a person you disagree with is simply afraid of new technology, dates back to the original protests, a connotation created by the journalists on the side of ‘king and country’. Basically, its origins are snobbery and a deliberate misunderstanding of what the followers of General Ludd (a folkloric figure, not a real person) were protesting about. Using ‘Luddite’ as an insult implies you are sneering at someone’s beliefs and intelligence, and it also tells me a lot about the person who uses it, mainly that they’ve got their head firmly wedged up their arse and can’t be bothered to pull it out long enough to understand other people’s perspectives.
There is an argument that the Luddites didn’t have a problem with the machines per se.
They just made an obvious target to highlight some more general grievances; and the economic consequences gave them leverage.
Sort of like how railway and health workers don’t have any issue with public transport or the NHS; but they may still strike and shut things down to make a point. And you can extrapolate that to a general strike.
@Pope of Discord:
The problem here isn’t the technology. It’s capitalism. Same as when factory machines put cobblers out of business two centuries ago.
@ Surplus
I completely agree, but artists have every right to be furious and frightened about AI because they happen to still live under capitalism. They’ve spent thousands of hours mastering a skill and applying their own creative energy to create art – often at great personal and financial cost (no one becomes an artist to get rich). They were then encouraged by everyone to upload their work online which was then, without their knowledge or consent, used to feed a machine that can mimic their style and essentially steal their ideas without compensation or credit.
This is one of those things which is all intellectual to those it doesn’t affect but very upsetting to those it does. I was chatting with writers discussing using AI art to create book covers (including in the style of X living artist), and they were extremely blithe about the potential ethical concerns – until someone pointed out that people will soon be able to type in “write a 70,000-word fantasy novel with fairies, robots and a plucky female protagonist in the style of (insert favourite writer)”, and it could well mean that their modest but comfortable living self-publishing in their niche dries up. It’s also going to kill a huge swath of software development/coding and advertising jobs. The prospect of losing your employable skills under capitalism, with salt rubbed in the wound by technology *that wouldn’t exist without the uncompensated contributions of you and thousands like you*, is life-changing and frightening.
David Bowie once got sued for sounding too much like David Bowie; by the company that owned his back catalogue.
And with that being vaguely relevant to the topic at hand, I can justify posting this. And I guess it also touches on gender non conformity, or masculinity, or some other stuff this blog covers.
@Surplus to Requirements:
Oh, yeah? Well, guess what: like crypto, training and maintaining AIs is resource-intensive and generates a disproportionately large amount of carbon emissions.
Move Aside, Crypto. AI Could Be The Next Climate Disaster.
Sure, I guess it can improve with time, but the technocrats currently pushing the tech forward couldn’t care less, could they? As far as I’m concerned, anyone currently in favor of the widespread implementation of AIs is in favor of capitalism as well; you are an ally and cheerleader of capitalism whether you’re willing to accept it or not.
Non sequitur. And I challenge you to find anywhere where I have “cheerled capitalism”.