A conversation with Robert N. Charette can be quite depressing. Charette, who has written about software failures for this magazine for the past 20 years, is a renowned risk analyst and systems expert who, over his 50-year career, has seen more than his share of delusional thinking among IT professionals, government officials and corporate executives before, during and after massive software failures.
In “Why Software Fails,” his seminal 2005 IEEE Spectrum article documenting the causes of large-scale software failures, Charette noted: “The greatest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don’t see preventing failure as an urgent matter, even though that view risks harming, and perhaps even destroying, the organization. Understanding why these attitudes persist is not just an academic exercise; it has huge implications for business and society.”
Two decades and several trillion wasted dollars later, he finds that people are still making the same mistakes. They claim their project is unique, so the lessons of the past don’t apply. They underestimate the complexity. Managers come out of the gate with unrealistic budgets and deadlines. Testing is inadequate or skipped entirely. Too-good-to-be-true vendor promises are taken at face value. New development approaches such as DevOps or AI copilots are deployed without the proper training or the organizational changes needed to make the most of their capabilities.
To make matters worse, the enormous impact of these errors on end users is not fully taken into account. When the Canadian government’s Phoenix payroll system first failed, for example, its developers glossed over the long-term financial and emotional suffering inflicted on tens of thousands of employees receiving erroneous paychecks; the problems persist today, nine years later. Perhaps that’s because, as Charette recently told me, IT project managers face no professional licensing requirements and are rarely, if ever, held legally liable for software failures.
While medical devices may seem far removed from gigantic IT projects, they have something in common. As special projects editor Stephen Cass discovered for this month’s data feature, the U.S. Food and Drug Administration recalls an average of 20 medical devices per month due to software issues.
“Software is as important as electricity. We would never put up with the power going out every other day, but we sure as hell have no problem putting up with AWS going down.” —Robert N. Charette
Like IT projects, medical devices face fundamental challenges stemming from software complexity. That means testing, however rigorous and regulated in the medical field, cannot cover every scenario or every line of code. The main difference between failed medical devices and failed IT projects is the strict liability attached to the former.
“When you create software for medical devices, there are a lot more standards to meet and a lot more concern about the consequences of failure,” notes Charette. “Because when these things don’t work, there’s a tort, which means the manufacturers are on the hook. It’s much harder to bring a case and win when you’re talking about an electronic payroll system.”
Whether a software failure is hyperlocal, as when a medical device fails inside your body, or spreads across an entire region, as when an airline’s ticketing system goes down, organizations need to understand the root causes and apply those lessons to the next device or IT project if they hope to keep history from repeating itself.
“Software is as important as electricity,” says Charette. “We would never put up with the power going out every other day, but we sure as hell have no problem putting up with AWS, telcos, or banks going down.” He lets out a heavy sigh worthy of A. A. Milne’s Eeyore. “People just shrug their shoulders.”