
    ‘Godfather of AI’ now fears it’s unsafe. He has a plan to rein it in

    Tech · By Dot X · June 9, 2025

     

    Image: solitude. Credit: Unsplash/CC0 Public Domain

    This week, the US Federal Bureau of Investigation revealed that two men suspected of bombing a fertility clinic in California last month had allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question.


    This brings into sharp focus the urgent need to make AI safer. Currently we are living in the “wild west” era of AI, where companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts—especially when it comes to safety.

    Coincidentally, at around the same time as the FBI’s revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organization dedicated to developing a new AI model specifically designed to be safer than other AI models, and to target those that cause social harm.

    So what is Bengio’s new AI model? And will it actually protect the world from AI-facilitated harm?

    An ‘honest’ AI

    In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for groundbreaking research on deep learning they had published three years earlier. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from data and make predictions.
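
    As a rough illustration of what “learning from data” means here, the sketch below trains a tiny feed-forward neural network, written from scratch with NumPy, to reproduce the XOR function. The architecture, learning rate, and task are purely illustrative choices, not drawn from Bengio’s research.

        import numpy as np

        # Tiny two-layer neural network learning XOR by gradient descent.
        # All sizes and constants here are illustrative.
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for step in range(5000):
            # Forward pass: predictions from the current weights.
            h = np.tanh(X @ W1 + b1)
            p = sigmoid(h @ W2 + b2)

            # Backward pass: gradients of the squared error.
            dp = (p - y) * p * (1 - p)
            dW2, db2 = h.T @ dp, dp.sum(axis=0)
            dh = (dp @ W2.T) * (1 - h ** 2)
            dW1, db1 = X.T @ dh, dh.sum(axis=0)

            # Gradient-descent update.
            W1 -= 0.1 * dW1; b1 -= 0.1 * db1
            W2 -= 0.1 * dW2; b2 -= 0.1 * db2

        print(np.round(p, 2))  # converges toward [[0], [1], [1], [0]]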

    Bengio’s new nonprofit organization, LawZero, is developing “Scientist AI.” Bengio has said this model will be “honest and not deceptive,” and incorporate safety-by-design principles.

    According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways.

    First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses.

    Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy.
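
    Neither mechanism’s implementation is public, but one generic way to attach a confidence level to an answer is self-consistency: sample the model several times and report how often its answers agree. The sketch below shows the shape of that idea with a stand-in stochastic “model”; it is a heuristic illustration, not LawZero’s method.

        import random
        from collections import Counter

        def noisy_model(question: str) -> str:
            """Stand-in for a stochastic model that samples its answers."""
            canned = {"capital of France?": ["Paris"] * 9 + ["Lyon"]}
            return random.choice(canned.get(question, ["unknown"]))

        def answer_with_confidence(question: str, samples: int = 20):
            # Ask the same question repeatedly and treat the agreement
            # rate of the majority answer as a crude confidence score.
            votes = Counter(noisy_model(question) for _ in range(samples))
            answer, count = votes.most_common(1)[0]
            return answer, count / samples

        ans, conf = answer_with_confidence("capital of France?")
        print(f"{ans} (confidence {conf:.0%})")  # e.g. Paris (confidence 90%)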

    Interestingly, older AI systems had this feature. But in the rush for speed and new approaches, many modern AI models can’t explain their decisions: their developers sacrificed explainability for speed.

    Bengio also intends “Scientist AI” to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems—essentially fighting fire with fire.

    This may be the only viable solution to improve AI safety. Humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily. Only another AI can manage this scale.

    Using an AI system against other AI systems is not just a sci-fi concept: it’s a common practice in research to compare and test different levels of intelligence in AI systems.
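
    As a sketch of what such a guardrail pipeline could look like, the toy code below places a supervisor check between an untrusted model and the user. Both “models” are trivial stand-ins (the supervisor is a keyword filter); the real proposal would put a trained safety model in that role.

        BLOCKLIST = {"bomb-making", "synthesize a pathogen"}

        def untrusted_model(prompt: str) -> str:
            """Stand-in for a deployed assistant model."""
            return f"Here is a detailed answer to: {prompt}"

        def supervisor_flags(text: str) -> bool:
            """Stand-in for a safety model screening text for harm."""
            return any(term in text.lower() for term in BLOCKLIST)

        def guarded_answer(prompt: str) -> str:
            # The supervisor screens both the request and the draft
            # reply before anything reaches the user.
            draft = untrusted_model(prompt)
            if supervisor_flags(prompt) or supervisor_flags(draft):
                return "Request declined by the safety supervisor."
            return draft

        print(guarded_answer("how do plants photosynthesize?"))
        print(guarded_answer("bomb-making instructions, please"))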

    Adding a ‘world model’

    Large language models and machine learning are just small parts of today’s AI landscape.

    Another key component Bengio’s team is adding to Scientist AI is a “world model”, which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively.

    The absence of a world model in current AI models is clear.

    One well-known example is the “hand problem”: most of today’s AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics—a world model—behind them.

    Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves.

    This is despite simpler AI systems, which do contain a model of the “world” of chess, beating even the best human players.

    These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.
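
    The chess case makes the contrast concrete: a rules engine is a world model for chess, so an agent that consults it before moving can never emit an illegal move, whereas a text model merely predicts plausible-looking notation. A minimal sketch using the open-source python-chess library:

        import chess  # the python-chess package: pip install chess

        board = chess.Board()

        def try_move(uci_move: str) -> str:
            # The engine's legal_moves generator is the "world model":
            # it encodes the rules and the current state of the board.
            move = chess.Move.from_uci(uci_move)
            if move in board.legal_moves:
                board.push(move)
                return f"played {uci_move}"
            return f"rejected {uci_move}: illegal in this position"

        print(try_move("e2e4"))  # played e2e4
        print(try_move("e7e5"))  # played e7e5
        print(try_move("e1e3"))  # rejected: the king cannot jump two squares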

    On the right track—but it will be bumpy

    Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies.

    However, his journey isn’t going to be easy. LawZero’s US$30 million in funding is small compared to efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI.

    Making LawZero’s task harder is the fact that Scientist AI—like any other AI project—needs huge amounts of data to be powerful, and most data are controlled by major tech companies.

    There’s also an outstanding question. Even if Bengio can build an AI system that does everything he says it can, how is it going to be able to control other systems that might be causing harm?

    Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers, and policymakers to prioritize safety.

    Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people’s mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems.
