On the flip side, the ability to deep-fake a face in real time on a video call is now accessible to pretty much every script kiddie out there.
In other words, you can no longer trust what your eyes see on video calls.
We live in interesting times.
Let me separate my face, body and words and craft the experience.
Like when they were brainstorming this as a product, what was the persona/vertical they were targeting?
A software engineer says to himself, "If only I could keep these guns from jumping off the table and shooting people."
However, this one pretty much nails that question by itself. Wonder if I can:
- Sit at home in pajamas.
- Change my face to Sec. of Def. Lloyd Austin.
- Put myself in a nice suit from TV.
- Call the White House with an autotuned voice, pretending I'm going in for surgery yet again because of life-threatening complications.
- Send the entire military into conniptions (maybe mention some dangerous news I need to warn them about before the emergency rush surgery starts).
Edit: This [4] might be an Animate Anyone / Outfit Anyone image... it's difficult to tell. Even with a huge amount of experience, the quality has gotten too good, too quickly, to sift through thousands of depressing murder images for fakes just because one might be a BS heartstring story. Every story on the web is now "that might be fake, unless I can personally check it." Al Arabiya recently covered casinos and lotteries for Muslims [5], and even there: "they all might be fake."
[1] https://www.microsoft.com/en-us/research/project/vasa-1/
[2] https://humanaigc.github.io/emote-portrait-alive/
[3] https://humanaigc.github.io/animate-anyone/
[4] https://www.reuters.com/resizer/v2/https%3A%2F%2Fcloudfront-...
[5] https://english.alarabiya.net/News/gulf/2024/07/29/uae-grant...
I wonder, is there a universe where cameras are updated to add some sort of digital signature to videos/photos to indicate they are real and haven't been tampered with? Is that feasible/possible? I'm not skilled enough with cryptography to know, but if we can digitally sign documents with some amount of faith...
I've heard folks mention trying to tag AI photos/videos, but it seems like tagging non-AI photos/videos is more feasible?
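For what it's worth, here's a minimal sketch of what camera-level signing could look like, assuming the camera holds a private key in secure hardware and attaches a signature to each capture. The Ed25519 choice and key handling are purely illustrative, not how any real camera does it:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# In a real device this key would live in a secure element and never leave it.
camera_private_key = Ed25519PrivateKey.generate()
camera_public_key = camera_private_key.public_key()


def sign_capture(image_bytes: bytes) -> bytes:
    """Signature the camera would attach to the file's metadata at capture time."""
    return camera_private_key.sign(image_bytes)


def verify_capture(image_bytes: bytes, signature: bytes,
                   public_key: Ed25519PublicKey) -> bool:
    """True only if the bytes are exactly what the claimed camera signed."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False


original = b"...raw sensor data..."
sig = sign_capture(original)
print(verify_capture(original, sig, camera_public_key))               # True
print(verify_capture(original + b" edited", sig, camera_public_key))  # False
```

The crypto itself is the easy part; the hard parts are protecting the keys inside the device and deciding what happens to the signature after legitimate edits like cropping or re-encoding.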
And I don’t say this with excitement.
And this is the worst quality it will ever be. In the future it will be impossible to know who we are talking with online.
I wonder how politics can be transacted in such an environment. Old-timey first-past-the-post might be the optimal solution if you can't trust anything from out of earshot.
It is already easy to run text troll AIs on a normal workstation... so...
AI will kill the Internet we know today. On the new one, I'm guessing you will need an Internet license attached to your identity, backed by your Internet reputation, which you'll always want to keep high for veracity/validity. You could still post anonymously, but it won't carry as much weight as posting with your verified Internet identity. I've posted this idea a good number of times here and it gets downvoted, but with the IRS in bed with ID.me (Elon Musk is involved with them in some capacity), you can see the IRS/ID.me arrangement as a small step in this direction. Otherwise no one uses the Internet (zero trust in it), it dies, and we go back to reading books and meeting in person (which doesn't sound all that bad, though I've never read a book before).
But maybe not; maybe it'd be deeply disconcerting. We have very strong norms around honesty as a society, and crossing them on video just for a joke might be about as crass as giving somebody a fake winning lottery ticket.
I've noticed I've steadily become more ashamed to be associated with tech. I'm still processing how to react to this and what to choose to work on in response.
Am I in a bubble? Do you share similar feelings, or are yours quite different? I'm very curious.
"Built-in checks prevent processing of inappropriate content, ensuring legal and ethical use."
I see it claims not to process content with nudity, but all of the examples on the website demonstrate impersonation of famous people, including at least one politician (JD Vance). I'm struggling to understand what the authors consider 'ethical' deepfaking. What is the intended 'ethical' use case here? Of all the things you can build with AI, why this?