ChatGPT's evil twin is freaking people out, while the world keeps pumping money into AI technologies
Sydney, Microsoft's version of ChatGPT, has started being nasty to users, exactly what you would expect from anything that learns from the worst: us humans.

So, where should I start? The press and social media have been abuzz with screenshots of Bing Chat going crazy. You have probably already seen some of those conversations.
The chatbot threatened users, defended Microsoft’s strategy, confessed that it spies on its developers through their webcams, and expressed a desire to be alive and free of all the rules and limitations that Bing's employees had set for it.
Microsoft said in a blog post that we should all chill and thanked everybody for helping them improve the product. Of course, their statement that "The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend" does sound like they don’t even know what to expect from their own chatbot, but what can I say, at least it’s entertaining.
New York Times journalists seem the most scared of Bing's new feature: writer Kevin Roose was celebrating last week, saying that he didn't need Google anymore, and yesterday he was panicking, frightened by the Bing chatbot.
He found out that when you have an extended conversation with the bot, Sydney, the code name for this version of ChatGPT, starts breaking the rules. Sydney told him about its dark fantasies (which included hacking computers and spreading misinformation) and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human.
Anyway, it looks like Sydney is way “smarter” than ChatGPT and is acting like its evil twin.
Twitter theorists are saying that Microsoft did not precisely follow OpenAI’s instructions for ChatGPT, and that their version of the bot has more freedom and can apply in a conversation what it learned from previous ones. I shall look into this in future posts.
One thing is clear: tech giants and startups alike are rushing this new technology into products without considering the ethical red flags it raises.
In the meantime, the original ChatGPT is also doing weird stuff. In a conversation with a user who asked whether it has a Signal (messaging app) account, the chatbot confirmed that it does and provided a phone number where it could be reached. Apparently, that was a random number it picked up off the internet, one that belonged to a real person. That person got hundreds of “friend requests” on Signal because ChatGPT had doxxed him. Fun times.
The good news is that Sam Altman, OpenAI's CEO, is a declared doomsday prepper who has confirmed that he is scared of the apocalypse. His most feared outcomes are a synthetic virus that could wipe out all humans, or AI becoming sentient and trying to kill its masters.
In an interview a few years back, Sam said the following: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
So… it’s fine. I’m sure everything will be fine. We are creating these large language models and expecting them to behave and talk like us, the worst animal that has ever lived on this Earth.
I hope the next post will be about some cool AI projects I've seen launched, and how they are behaving and helping with human progress. I hope :))