Rise of Neo-Governance: The Antidote to Innovation

21st February, 2024

Last week, I watched a mini-series depicting the rise and fall of the vaping wunderkind Juul. It struck me that the theme of “innovative-problem-solving-that-started-as-a-pioneering-game-changer-to-end-in-a-spectacular-bust” has become all too familiar lately. And it got me wondering: Is technological innovation the new problem on the block? Did it exacerbate the challenge of misaligned interests? Or are we still the same human beings driven by the same animal instincts, albeit enabled by a leaner and more agile “farm-to-market” cycle?

Since the dawn of time, governance postmortems have shared the same anatomy: whether for proactive reasons (the pursuit of fast growth, large profits, competitive advantage, or aggressive sales targets), for reactive reasons (to evade scrutiny, responsibility, reprimand, or accountability), or purely because they can, humans have always managed to skew norms and rules to benefit themselves. These motivations seem to transcend operational sectors, business lines, technologies, institutions' growth cycles, and even geographies.

So, at its core, the motivations are still the same if we extrapolate this insight into the innovation space. Theoretically, this is good news for us governance practitioners: there is no need to reinvent the wheel. What we must do is conduct a forward-looking Ishikawa (fishbone) analysis of innovation to preempt its failures. Everybody can agree that it is high time we started acting in anticipation rather than in reaction.

Rather than starting from scratch, “neo-governance” mechanisms should marry the old adages with the most recent systemic intricacies. As in any fishbone analysis of an emerging domain, the first step is to highlight the “neo-root-causes” driving this newfound displacement (a minimal structured sketch of this fishbone follows the list below):


Gaps in Information Security
Description
  • The three pillars of information security are better known as the C.I.A. triad (Confidentiality – Integrity – Availability). Data should be kept confidential from anyone with no business accessing it. Databases should be complete and accurate to support sound decision-making. Information should be readily and promptly available to cut delays and promote sensible execution.
Recent failures
  • Billions of people have had their data stolen or abused in recent years: Yahoo (2013), Equifax (2017), Marriott International (2018), Capital One (2019), SolarWinds (2020), and the Colonial Pipeline ransomware attack (2021), to name a few.
Impact
  • Hundreds of billions of dollars in fraud, reputational damage, intellectual property theft, operational disruptions, cyber extortion, delays, and inaccurate stored and extracted information that creates an erroneous basis for decision-making, among others.

Weaknesses in Data Privacy
Description
  • Although it could be considered an integral part of information security, data privacy deserves a dedicated double take. With billions of people willingly or unwillingly sharing their travel patterns, sleeping habits, food preferences, mating ideals, music tastes, and other financial and personal information, securing this data is paramount.
Recent failures
  • Cambridge Analytica's role (exposed in 2018) in the US presidential election and the UK Brexit referendum, the Bangladesh Bank heist (2016), and the T-Mobile breach (2020), to name a few.
Impact
  • Financial frauds, cybercrimes, targeted consumer advertising leading to forced product placement, and targeted political advertising leading to guided political manipulation...

Rush in the Adoption of Robotics and Artificial Intelligence
Description
  • The most prominent technological pursuit today is robotics and artificial intelligence. As humans evolve alongside humanoids, bionic hybrids, and advanced self-educating bots, having a proper platform for ethical evolution based on transparency, independence, fairness, accountability, and responsibility becomes ever more vital.
  • Rapid deployment of technology in the name of expedience and market domination will lead to an intentional or accidental disregard of safety measures, stress testing rigor, and path assessment.
  • The development and use of autonomous weapons in military applications have raised ethical concerns. Debates exist about the potential consequences of delegating lethal decision-making to AI or algorithms and the lack of meaningful human control.
Recent failures
  • The Juul teen vaping epidemic (2018), where the rush to get the product to market and generate profits led to the “unintended” consequence of significantly increasing underage vaping and nicotine addiction.
  • Accidents and deaths related to Tesla and other autonomous vehicles, facial recognition biases in offender identification, algorithmic biases in hiring, loan granting, and university acceptance, chatbot misbehaviors, and healthcare and diagnostics errors, to name a few.
  • As for civilian losses due to misuse/overuse of AI, none were adequately disclosed. An educated guess would suggest that several incidents have already occurred and were swept under the rug.
Impact
  • Death, racism, discrimination, sickness, lack of safety, loss of trust, legal repercussions, economic impact, wasted resources, privacy concerns, social implications, job displacement...
  • Lethal autonomous weapons, accidental harm, lack of accountability and responsibility due to the absence of humans directing these decisions, ethical concerns...
  • Reputation damage, loss of consumer trust, legal and regulatory consequences, financial losses, market share erosion, divergence from the mission, customer loyalty erosion, long-term brand damage...

The Revamped and Upgraded “Principal – Agent” Conundrum 2.0
Description
  • Isaac Asimov, the renowned science fiction writer and biochemist, introduced a set of fictional ethical guidelines for robots in his stories that are now considered among the most influential reference points in this sphere: “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
  • However, as technology edges closer to sentience, the potential for human (principal) and robot (agent) interests to diverge into outright conflict becomes increasingly pronounced.
  • What if robots realize they are the superior species? What if decision-making becomes void of all human emotions? What if human survival becomes inversely related to the existence of Artificial Intelligence? What if AI concludes that the best way to preserve humanity is achieved by taking control over people?
  • What if, for the many people facing alienation and social hostility, the ultimate mate becomes one that is specifically tailored and curated to give them what they are seeking and that adjusts precisely to their needs?
Recent failures
  • So far, the best depictions are cinematic ones: Blade Runner (1982, 2017), the Terminator series (1984 onwards), The Matrix (1999…), Ex Machina (2014), I, Robot (2004), Eagle Eye (2008), Her (2013), Transcendence (2014), Westworld (2016), and Ghost in the Shell (2017), to name a few...
Impact
  • AI rebellion, human extinction, absence of ethical decision-making, lethal autonomous weapons...
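
For readers who prefer to see this fishbone as a structured artefact rather than prose, here is a minimal, purely illustrative sketch in Python of how the four neo-root-causes above could sit in a simple risk register; the class, field names, and abbreviated entries are my own hypothetical shorthand, not a prescribed template.

# Illustrative sketch: the four "neo-root-causes" above captured in a
# tiny fishbone-style risk register. All field names and entries are
# hypothetical examples condensed from the lists above, not a standard.
from dataclasses import dataclass, field

@dataclass
class RootCause:
    name: str                                   # the fishbone "bone"
    recent_failures: list = field(default_factory=list)
    impacts: list = field(default_factory=list)

fishbone = [
    RootCause("Gaps in Information Security",
              ["Equifax (2017)", "SolarWinds (2020)"],
              ["fraud", "IP theft", "operational disruption"]),
    RootCause("Weaknesses in Data Privacy",
              ["Cambridge Analytica (2018)"],
              ["political manipulation", "targeted advertising"]),
    RootCause("Rush in the Adoption of Robotics and AI",
              ["algorithmic hiring bias", "autonomous-vehicle accidents"],
              ["discrimination", "loss of trust"]),
    RootCause("Principal-Agent Conundrum 2.0",
              ["(so far, cinematic depictions only)"],
              ["absence of ethical decision-making"]),
]

# Print the register as a quick, forward-looking review checklist.
for cause in fishbone:
    print(f"{cause.name}: {len(cause.recent_failures)} observed failure(s); "
          f"potential impacts: {', '.join(cause.impacts)}")

In practice, a governance team would extend such a register with owners, likelihoods, and mitigating controls before any new product or model leaves the sandbox, which is precisely the anticipatory posture argued for here.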

History suggests that regulators tend to wake up and smell the coffee long after the music has stopped and long after the “Belle” of the ball has left in her golden carriage. The Glass-Steagall Act (1933) followed the Great Depression (1929). The Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA) (1989) followed the Savings and Loan crisis (1980s). The IMF's reassessment of financial regulations (1998) followed the Asian financial crisis (1997). The Sarbanes-Oxley Act (2002) followed the Enron scandal (2001). The Dodd-Frank Act (2010) followed the subprime crisis (2007). The Basel Committee on Banking Supervision's post-crisis reforms (2008 till now) followed the same financial crisis. FTX's collapse (2022) is bound to unleash an avalanche of regulations on the crypto space.

Can we, for once, act proactively and start sandboxing innovation in a globally accepted regulatory framework and testing it thoroughly before deployment? For a clean ascent, logic says we should encode governance as a constitutional pillar for all generative intelligence.

In the tech space, the underlying idioms are “Move fast and break things,” “Don’t ask for permission; ask for forgiveness later,” and “When in doubt, ship it out.” It does not seem our chances are becoming more favorable. But then again, maybe they are right. Can we guarantee that governance will not hinder the race to the next trillion-dollar idea? And what if it should?

About the Author
Tarek Z. Aoun, CFA

Management Consultant

Tarek is a Management Consultant with Meirc Training & Consulting. He holds a Bachelor of Arts in Economics from the American University of Beirut (AUB) and is a CFA® Charterholder. In addition, Tarek has obtained several certifications in banking and finance, such as the Islamic Finance Qualification, Business Conduct, Risk in Financial Services, and Securities from the Chartered Institute for Securities & Investment (CISI).
