Welcome back to In the Loop, TIME's new twice-weekly newsletter about artificial intelligence. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?
What to know: preparing for an emergency
A major AI-enabled catastrophe is becoming more and more likely as AI capabilities advance. But a new report from a London-based think tank warns that the British government lacks the emergency powers necessary to respond to AI-enabled disasters, such as an attack on critical infrastructure or an AI-assisted terrorist attack. The U.K. should give its officials new powers, including the ability to compel tech companies to share information and to restrict public access to their AI models in an emergency, argues the report, which was shared exclusively with TIME ahead of its publication by the Centre for Long-Term Resilience (CLTR). It offers a model for AI legislation that could catch on not only in Britain, but also in other parts of the world with limited jurisdiction over AI companies.
Lack of levers – "Relying on 20- or 50-year-old legislation that was never designed for these technologies is not necessarily going to be the best approach," says Tommy Shaffer Shane, CLTR's director for AI and the author of the report. "We could find that if something really goes wrong, the government would try to find levers to pull, and discover that they're actually not attached to anything, and the action they need, perhaps within hours, simply doesn't happen."
Proposals – The report, which was timed to coincide with the governing Labour Party's annual conference this week, includes 34 proposals that CLTR hopes will be included in the government's long-planned AI bill. Along with granting the government the authority to compel tech companies to share information and withdraw access to models, the proposals include requiring AI companies to report serious AI safety incidents to the government, and having officials conduct regular readiness exercises.
A new approach to AI regulation – If the British government adopts these proposals, it would signal a different approach to AI regulation than that of, say, the European Union, whose lengthy AI Act seeks to regulate individual AI models. That EU law has drawn scorn from Silicon Valley and Washington, where influential figures claim it has stifled innovation in the European tech industry and imposed burdensome costs on American AI companies. Under the second Trump administration, that style of regulation is increasingly seen as synonymous with hostility toward U.S. economic interests.
Threading the needle – So how can Britain regulate AI while retaining access to the economic growth it promises, and staying in the good books of the U.S.? CLTR's answer: don't regulate the models themselves; instead, be prepared for their consequences. "What we're saying with emergency preparedness is a recognition that you're not going to have that type of intervention, [and] that you'll have more dangerous models more widely deployed than you would ideally want," says Shaffer Shane. "And so the question is: how do you prepare for that scenario?"
If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.
Who to know: Rodney Brooks, co-founder of iRobot
Over the weekend, a big name in the world of robotics, Rodney Brooks, co-founder of the company that brought you the Roomba, published a scathing essay arguing that the billions of dollars currently being invested in building humanoid robots, like those from Figure AI and Tesla, will fail to create a safe, dexterous, and therefore useful humanoid.
The reason, according to Brooks, comes down to a limitation in how these robots are trained. Figure and Tesla collect video data of humans performing actions and feed that data to a neural network. Brooks says this approach is flawed because it doesn't capture touch data: the kind of kinesthetic feedback that, he argues, is essential for a robot to learn how to be dexterous.
More money than ever before is being spent on building robotics, with a host of companies racing to be the first to crack what some believe is a multi-trillion-dollar market. If those companies are right that new robotic capabilities can emerge simply by scaling up video data (as happened with large language models), then the impact on the labor market and the economy will be huge. But if they're scaling the wrong type of data, Brooks writes, "a lot of money has disappeared."
AI in action
A 60-year-old man asked ChatGPT for advice on what to substitute for table salt to improve his diet, according to a study published in a peer-reviewed journal last month. ChatGPT suggested he switch to sodium bromide. Over the next three months, he began to experience fatigue, red spots on his skin, and difficulty walking. He was eventually diagnosed with bromism, a syndrome that can lead to psychosis, hallucinations, and even coma. "This case … highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes," the paper says.
In a statement to TIME, OpenAI said that ChatGPT is not intended for use in the treatment of any health condition and is not a substitute for professional advice. The company also said it has trained its AI systems to encourage people to seek professional guidance.
As always, if you have an interesting story about AI in action, we'd love to hear it. Write to us at: [email protected]
What we read
OpenAI, Nvidia, and Oracle: Unpacking the $100 billion bet on AGI, by Peter Wildeford on Substack
Top forecaster Peter Wildeford analyzes the circular deals between companies like OpenAI, Oracle, and Nvidia to finance datacenter construction, and observes that they are, in effect, turning the entire S&P 500 into a leveraged bet on AGI arriving in the next few years, with catastrophic consequences if it doesn't. He writes:
“The reason we should be somewhat concerned – or at least curious – about this infinite money is twofold. Firstly, AGI might lead to the serious destruction of everything we value and love, if not the entire human race. Secondly, approximately 7% of the S&P 500's total market capitalization is now value premised on the AI transformation arriving approximately on schedule.
“In other words, if AGI arrives in the near future, it could mean the end of humanity, but at least the S&P 500 will remain strong. On the other hand, if the AI scaling hypothesis hits unexpected walls, the unwinding could be a second dot-com bust or worse. And suddenly 25% of the S&P 500 is in free fall.”