#CaliBernication @brooklynnygirl ·
#AIWarning⚠️⚠️⚠️ CEO Says He’ll Hire Anyone Who Can Vibe Code With AI, Regardless of Actual Skill "We're very much now looking for people who are much more within that vibe coding space." futurism.com/artificial-int…
According to one of his executives, the "Diary of a CEO" podcast host Steven Bartlett views vibe-coding as a key skill for new hires.

I Am Sarah Femme @am_femme51751 ·
What did you think would happen? Time to listen to the experts or God’s warning in “I Am Sarah Femme: The Sequel” #idonotconsent #ai #aiwarning #ainews
ControlAI @ControlAI ·
AI Governance and Safety Canada's Executive Director Wyatt Tessari L'Allié tells Canadian MPs that we're already starting to see AI loss of control incidents, and that experts warn AI poses a risk of human extinction. He recommends Canada spearhead talks to prevent the threat.
Lewis @LewisMushaka ·
Warning: Without AI, your deployment cycles lag. Competitors using AI will innovate quicker. #AIWarning
Social Security Whisperer aka Greenspaceguy @greenspaceguy ·
Replying to @DiligentDenizen
@DiligentDenizen #AIwarning
Nav Toor @heynavtoor ·
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math.

Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.
Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
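The benchmark-incentive argument in the post above can be sketched numerically. This is a hedged illustration of the general idea, not OpenAI's actual scoring methodology: under grading where "I don't know" and a wrong answer both score zero, guessing always has an expected score at least as high as abstaining, so a model optimized for the benchmark learns to guess.

```python
# Illustration of the scoring-incentive argument: when a wrong answer
# costs nothing extra versus abstaining, guessing dominates.

def expected_score(p_correct, strategy, wrong_penalty=0.0):
    """Expected score on one uncertain question.

    p_correct: the model's chance of guessing correctly (hypothetical).
    strategy: "guess" or "abstain".
    wrong_penalty: points deducted for a wrong answer
                   (zero in the benchmarks the post describes).
    """
    if strategy == "abstain":
        return 0.0
    # Correct guess earns 1 point; wrong guess loses wrong_penalty.
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# With zero-penalty grading, guessing beats abstaining even at 5% confidence:
for p in (0.05, 0.25, 0.50):
    assert expected_score(p, "guess") > expected_score(p, "abstain")

# Penalizing wrong answers flips the incentive at low confidence,
# which is the kind of fix the post says would hurt usability:
assert expected_score(0.05, "guess", wrong_penalty=1.0) < 0.0
```

The sketch shows why "always guess" is the rational policy under the grading scheme the post describes, and why adding a penalty (or rewarding abstention) changes the optimal behavior only at low confidence.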