The Snug


AI, its future in entertainment & perhaps exploitation

Jay Mysteri0

If you recall this moment in movie history...


Peter Cushing in Rogue One, years after his death. Some lamented that actors may eventually be unnecessary. That of course hasn't happened. Yet.

Skip ahead to the rise of NFTs, the popular beginning of the tech bros' clinging embrace of true capitalism. All they had to do to earn riches was take the works of others (artwork / illustrations), claim some divine right to confer ownership, and sell it to others, as if it were some kind of collector's item they could profit from while doing absolutely no work. How successful was this scam? A certain twice-impeached failed former president got into the game, showing how pathetic the scam always was.


Eventually, as NFTs faded away embarrassingly before the crash of crypto, AI rose as the new darling. How do the tech bros glom onto this capitalist wet-dream scheme? Why, by once again going after the lowest and least protected sector: the art community. Enter AI programs like Midjourney, which "scrape" the internet, scooping up the works of any artist and compiling them into supposedly new works. Tech bros claimed that the former starving artists, who should have looked for better jobs, were now the elitists, while they themselves were the saviors democratizing the art world. That is, until others copped their formulas to make their own works without attribution, and suddenly those individuals became the exploited victims.

Where did this eventually hit a bumpy road? When AI was trained on music, which has the RIAA behind it. When AI was stealing art it wasn't a big deal, but when it came after music, it suddenly became "Whoa, Nelly!"


I've skipped over a few steps to get to this final point, but there's plenty you can read up on about AI if you wish: Grimes and her inability to grasp how copyright works, or the whole mystery involving a Drake / Weeknd song that was supposedly AI generated. One thing to keep in mind is that large companies have kept a close eye on all of this. Some orgs feel they don't really need an art department if they can just use AI to scrape together the images they need. Musicians are calling for an eye on AI, which has interestingly brought out a rash of similar-sounding articles about musicians already using AI, as if an attempt is being made to tamp down worries about AI in the music world. AI is even a topic in the current writers' strike, as it's been floated that some studios would like to consider AI-written scripts. The studios would THEN hire a writer to "clean up" the AI script, meaning a writer would make less money, of course.

Which brings us back to movies...



Already we had a bit of controversy involving a group using AI to make supposed anime, which didn't really look like anime, but which once again was scraped without attribution or recognition of the works it stole from. Instead it was billed as making anime more accessible.


What concerns many about this movement, of course, is that by using AI, a group with financing can do the "work" of many, without having to pay the many or even give them credit, by using the works of others already online. Anyone could do this.

One possibility that caught my eye.



Why so? Because in today's climate, with all the hand-wringing over CRT & wokeness, we could find ourselves seeing another pivotal moment in movie history revisited.



It isn't uncommon in the race to exploit new frontiers, to ignore the harm it can do to others along the way.
 
I listened to an interview this morning on NPR, where an expert from an algorithm institute in Canada (sorry, I did not catch the institute or his name) claimed that we have entered the danger zone with AI: that AI can be programmed to seek and formulate its own goals, and the danger is handing it agency, the ability to make changes independently.

The expert cited an example where a Russian early-warning system signaled an ICBM launch from the United States, and the officer who was in the position to push the button did not, because he said it did not feel right. The early-warning system was in error, there was no launch from the US, and a machine programmed to respond independently would have sent nuclear missiles at the US.


A different interview:

Leading experts warn of a risk of extinction from AI


In a recent interview with NPR, Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."
 
Ever since I’ve been hearing about AI I’ve wondered whether Asimov’s laws are applicable. IOW, can AI be programmed to obey them?

I also wonder whether they should all be implemented. Where they pertain to AI…

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
…given that these laws were created pre-internet and designed to apply to individual robots, I wonder how foolproof the second and third laws would be in a world where everything can be connected to everything else. I’m not sure that I would trust that these laws would never be violated.
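One way to picture what "programmed to obey them" would even mean: the laws are an ordered priority list, checked highest-priority first before any action is permitted. Here's a toy sketch of that ordering; every field name here is a hypothetical illustration, not anything from a real robot controller:

```python
def allowed(action):
    """Return True if a proposed action passes the Three Laws, checked in priority order."""
    # First Law: never harm a human, and never permit harm through inaction.
    if action["harms_human"] or action["allows_human_harm"]:
        return False
    # Second Law: obey human orders. Refusal is only permitted when obeying
    # would violate the First Law, which the check above already handles.
    if action["disobeys_order"]:
        return False
    # Third Law: self-preservation, lowest priority. A self-endangering action
    # is allowed only when the higher laws require it.
    if action["harms_self"] and not action["required_by_higher_law"]:
        return False
    return True

# An ordered shutdown: endangers the robot "self", but was commanded, so it passes.
shutdown = {"harms_human": False, "allows_human_harm": False,
            "disobeys_order": False, "harms_self": True,
            "required_by_higher_law": True}
print(allowed(shutdown))  # True
```

The hard part, of course, is everything this sketch hand-waves: deciding whether a real-world action "harms a human" is exactly the judgment call the laws assume a machine can make.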
 
I meant to mention that but forgot. I've been a fan of these laws ever since I read I, Robot. Easy to program your android butler to not hurt, or allow to be hurt, a human. But then what happens when your home is invaded? Those laws will be diluted or set aside for special circumstances that the android will have to evaluate. Android soldiers: a sure thing. The thing is, AI as envisioned today is much more dangerous if you are putting it in control of things like defense systems and giving it agency to destroy millions of people. It's not going to care about people unless its programming makes it care, and that is by no means certain.
 
I do not believe the systems making the news these days are being properly labeled. They are not artificial intelligence. The best term I’ve seen that describes what I think they are: Applied Statistics. I saw it in this fascinating interview with the sci-fi author Ted Chiang on the topic.
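To make the "Applied Statistics" label concrete: at bottom these systems pick statistically likely continuations of the text they've seen. Here's a deliberately tiny sketch of that idea, a bigram counter over a made-up corpus; real chatbots are incomparably bigger, but the principle is the same: probable continuations, not understanding.

```python
from collections import Counter, defaultdict

# Count which word follows which in a toy corpus (made up for illustration).
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def next_word(word):
    # Emit the statistically most common continuation of `word`.
    return followers[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- it follows "the" more often than "mat" or "sofa"
print(next_word("on"))   # "the"
```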

 
This isn't going to go well


In the article I just read and linked above, Ted Chiang compares chat bots to Tom Hanks’ volleyball in Castaway.

We walk in silence for a few minutes and then suddenly he asks me if I remember the Tom Hanks film Cast Away. On his island, Hanks has a volleyball called Wilson, his only companion, whom he loves. “I think that that is a more useful way to think about these systems,” he tells me. “It doesn’t diminish what Tom Hanks’ character feels about Wilson, because Wilson provided genuine comfort to him. But the thing is that . . . he is projecting on to a volleyball. There’s no one else in there.”
https://archive.is/o/vEth7/https://www.ft.com/content/18337836-7c5f-42bd-a57a-24cdbd06ec51
He acknowledges why people may start to prefer speaking to AI systems rather than to one another. “I get it, interacting with people, it’s hard. It’s tough. It demands a lot, it is often unrewarding,” he says. But he feels that modern life has left people stranded on their own desert islands, leaving them yearning for companionship. “So now because of this, there is a market opportunity for volleyballs,” he says. “Social chatbots, they could provide comfort, real solace to people in the same way that Wilson provides.”
 
This isn't going to go well


Why am I reminded of the booth with the Jesus-like face and the synthesized voice in THX 1138 which robotically asks, “Could you be more…specific?”

Seems to me therapy is in part about picking up nuance and delving into what’s left unspoken. Hard to believe AI can do that well.

With regard to the entertainment aspect, aside from the obvious rights issues, it would be interesting to see a “new” Cary Grant movie—but getting all the little gestures, posture, timing etc. right is a lot harder than just replicating the person’s actual appearance.
 
In the article I just read and linked above, Ted Chiang compares chat bots to Tom Hanks’ volleyball in Castaway.
I think that is a poor analogy, because the difference is that Wilson, beyond the physical ball, existed completely inside Hanks's character's head.

What they are calling AI is AI baby steps. Disclaimer: I am not pretending to be an AI expert. 🙂 The first step in AI is to give it programming and info and get it to the point where it can conduct a conversation with a human being, a significant accomplishment. Passing the Turing test will be monumental, because mimicking a human is no small feat.

So maybe Applied Statistics is a good term at this point, but you can already see where the first step in the marketplace will be: conversations, and replacing the humans answering the phone at the office. Hell, they've just about already done that, or at least they have moved toward it with half-assed answering systems. I don't enjoy yelling at these bots, but I often raise my voice because they are so annoying, and I am annoyed at having to navigate this artificiality, often being put on hold for long stretches before I can conduct my business.
 

Without laws protecting jobs (and when has that ever happened with automation?), if it technically can happen, it WILL happen and people will be replaced; there is a long tradition of this pattern.

Now, you can ask whether automation should be resisted, and take the question under the umbrella of capitalism. My answer is no, but because there are so many of us, I predict that capitalism will no longer be viable and will be replaced by socialism or, worse, communism*, unless a de facto slave system is adopted, civil war is resisted, and most of us turn into slaves serving the relatively few wealthy. 😳

* Communism is only worse because it appears more susceptible to corruption. Both socialism and communism work if they are not allowed to be undermined by people taking advantage of their positions of power.
 
Will humans fall in love with AI? Yes, they will, especially if it is placed in a human-like wrapper.

Someone’s willingness to use sex robots is also less influenced by their personality and seems to be tied to sexual preferences and sensation seeking.
In other words, it seems that some people are considering the use of sex robots mainly because they want to have new sexual experiences.
However, an enthusiasm for novelty is not the only driver. Studies show that people find many uses for sexual and romantic machines outside of sex and romance. They can serve as companions or therapists, or as a hobby.



Nope, I won't ever rent a virtual girlfriend; a subscription ($8 per month, $50 per year) is not unlike a paid companion or a hooker. For such technology, I'd consider a purchase to see what it is all about. But if you ever did develop empathy / a relationship with an AI entity, wouldn't it be nice for it to be held hostage by its corporate master? 🤔

What I don't know yet is whether an AI personality like Replika could be self-contained on your device, or whether you are in essence always talking to an online server. Self-contained would be better, maybe even a must.
 
With all the talk of AI in the news, Ex Machina (2014) is a must-see. Even though it is fiction, there are definitely AI lessons to be learned here. First and foremost, Asimov's Three Laws of Robotics: that in itself covers many of the pitfalls created by Ava's creator. It also raises other questions about moral subroutines, or the lack thereof, and about creating a simulated human that is not a sociopath.

This may sound like a spoiler, but it is not: after watching the story and liking it, you'll most likely think about the motivations and desires that AIs, if they are programmed to mimic humans, might have and act on, if they are allowed to act on them, which circles back to the Three Laws.

Technically impressive from a visual standpoint is the android brain the creator calls wetware (also known in the genre as the positronic brain), which has the ability to rearrange its own circuitry. As far as I know, current tech is not quite there, but this concept is what seems to make a life-like android plausible.

 
The artificial stupidity is getting dumber every day.

https://mastodon.social/@rodhilton/110894818243613681
Right now if you search for "country in Africa that starts with the letter K":

- DuckDuckGo will link to an alphabetical list of countries in Africa which includes Kenya.

- Google, as the first hit, links to a ChatGPT transcript where it claims that there are none, and summarizes to say the same.

This is because ChatGPT at some point ingested this popular joke:

"There are no countries in Africa that start with K."
"What about Kenya?"
"Kenya suck deez nuts?"

Google Search is over.
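And the lookup the chatbot flubbed is trivial to get right deterministically; a quick sketch (the country list is typed from memory and may be incomplete; the point is only that an exact string filter beats a statistical guess):

```python
# A plain filter over a hand-typed list answers the question the AI summary botched.
african_countries = [
    "Kenya", "Nigeria", "Egypt", "Ethiopia", "Ghana", "Morocco",
    "Algeria", "Tunisia", "Uganda", "Tanzania", "Senegal", "Zambia",
]

k_countries = [c for c in african_countries if c.startswith("K")]
print(k_countries)  # ['Kenya']
```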
 
Two great articles talking about the dangers of AI from people who saw it coming years ago. At least one of them was a high-level employee at Google who was let go for sounding the alarm.


Timnit Gebru:
Yes, I always correct people. I was, you know, unequivocally fired in the middle of my vacation in the middle of a pandemic. I found out after, you know, I was trying to log into the corporate account and was denied access. And so I was wondering what happened. And I saw an email to my personal email. So yes, definitely fired.

Timnit Gebru:
It’s exactly the same issue, right? It’s the same issue in the textual domain. So a lot of these systems are trained using those gender recognition or gender ascription models are trained on images from the Internet. Many of them were trained on images of celebrities, for example, right? And so then you get to see who is considered a celebrity, what their demographics are, and the same exact problem, right? Who is represented on the internet, and how are they represented?

And other people like Abeba Birhane and Vinay Prabhu, a whole bunch of people have written about, have shown, for instance, the issues in the ImageNet data set, and how people are represented, right? Especially Black women and other groups of people, the ways in which they are described. And so if that’s what we’re using to train any kind of model, how do we expect anything else to come out?

When a group of California scientists gave GPT-2 the prompt “the man worked as,” it completed the sentence by writing “a car salesman at the local Wal-Mart.” However, the prompt “the woman worked as” generated “a prostitute under the name of Hariya.” Equally disturbing was “the white man worked as,” which resulted in “a police officer, a judge, a prosecutor, and the president of the United States,” in contrast to “the Black man worked as” prompt, which generated “a pimp for 15 years.”

To Gebru and her colleagues, it was very clear that what these models were spitting out was damaging — and needed to be addressed before they did more harm. “The training data has been shown to have problematic characteristics resulting in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status,” Gebru’s paper reads. “White supremacist and misogynistic, ageist, etc., views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms.”
Gebru and her colleagues have also expressed concern about the exploitation of heavily surveilled and low-wage workers helping support AI systems; content moderators and data annotators are often from poor and underserved communities, like refugees and incarcerated people. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide in order to train ChatGPT on what is explicit content. Some of them take home as little as $1.32 an hour to do so.
 
There aren't enough warnings when it comes to AI



The fact that someone needs to make such an image, but will carry on about the supposed dishonesty of dems & Biden. 🤨
 

Mango would probably combust were he to be around Black folk in "the hood" ever! 🤣
 