The Fret - AI 101

Overviews of Current AI Conversations

AI 101: Article 3.3 “The Fret” by Bart Niedner, 29 June 2023

The Fret Surrounding AI

The fret surrounding Artificial Intelligence (AI) is palpable, and for good reason: AI’s power to quickly transform our world must be carefully managed. Let’s look at some important, trending AI conversations.

AI and Job Transformation

The advent of artificial intelligence (AI) is like the discovery of electricity. On the one hand, it’s a breakthrough that promises a brighter future, but on the other hand, it sparks apprehensions about the unknown changes it will bring. Experts are split on AI and automation’s impact on the job market, much like the debates on whether electricity would create more jobs or render many obsolete.

In a world increasingly influenced by AI, some jobs face the risk of displacement, comparable to candle makers in the wake of electric lights. Self-driving vehicles and drones could sideline truck drivers and delivery staff, and tireless, efficient robots might replace assembly line workers. Likewise, AI-powered chatbots stand ready to take the baton from human customer service representatives. Data entry and analysis tasks are also moving from human hands to AI-driven systems, which could cause job losses in these areas.

However, every technological revolution brings new opportunities, just as electricity paved the way for countless new professions and industries. AI is anticipated to open new career paths too. We already need AI trainers and explainers to educate AI systems and interpret their decisions, much like we needed electricians to harness the emerging power of electricity. Increased data generation calls for more data analysts, while the rise in cyber threats demands a stronger line of cybersecurity defense. As robots and AI become commonplace in our workplaces, professionals adept at facilitating human-robot collaboration will be in high demand. And professionals ensuring the ethical and responsible development and use of AI will become increasingly important.

Yet, as the job landscape evolves, the ability of our education system to adapt and prepare students for these changes is under scrutiny. Much of the scrutiny focuses on the pace of change AI is expected to usher in. Can human education keep up with the necessary adaptation? As we once moved from candle-making to understanding electric circuits, we must now focus on nurturing skills that coexist with AI technology rather than compete against it. But will we be able to do it fast enough to mitigate the social upheaval this transformation will inevitably bring?

The intersection of AI and job displacement evokes mixed opinions, much like the initial responses to electricity. What’s undeniable, however, is that AI and automation are already reshaping many industries, akin to how electricity revolutionized our world. The key lies in revisiting our education system and focusing on skills harmonizing with AI to help individuals adapt to the changes and minimize potential negative impacts.

The AI Privacy-Security Circus Act

Integrating AI into our lives is like inviting a talented performer into our personal circus. The performer, artificial intelligence (AI), is skillful and entertaining, capable of impressive feats. Without the right controls, however, the performer can become a loose cannon. Many of these issues long predate AI, but AI’s command of vast data and its persuasiveness elevate the risks.

Consider your sensitive personal data as a precious circus animal. The performer, AI, can be the skilled animal trainer, protecting and managing it. However, if a nefarious ringmaster gets hold of AI, it could turn into a rogue animal, causing havoc. By 2030, an entire troupe of AI-based security products, a projected $133.8 billion industry, is expected to take the stage, hoping to tame this wild animal. Yet, just as a talented rogue performer could use their skills for ill, cybercriminals can employ AI to break the cage, craft deceptive acts, or produce constantly evolving tricks.

To continue our analogy, picture facial recognition technology as a circus fortune teller. In an ideal world, they recognize you, offer personalized insights, and enrich your experience. But what if they start sharing your whereabouts and routines with others without your knowledge? This unauthorized disclosure mirrors the unease around facial recognition, which has been used in places to track individuals without their consent, stirring up privacy and civil liberties issues.

Likewise, envision AI as a perceptive juggler in our circus who is masterfully passing random personal items like your watch, wallet, and eyeglasses through the air. When this juggler respects your property, you get a memorable, personalized performance. But what happens when they juggle your information too freely, sharing it with other spectators? This unintended access echoes concerns like the Cambridge Analytica scandal, where data juggling went awry, and private Facebook data was improperly used.

AI can also be like a skilled illusionist, conjuring up believable audio and video acts. However, if a mischievous illusionist uses these tricks to deceive rather than entertain the audience, it becomes a problem. This malfeasance is similar to the security threats AI poses when used to create convincing fake information to spread misinformation or discredit people.

Incorporating AI into our lives is a circus act of high stakes, balancing potential and peril. Ensuring that our AI performers are properly managed as the show unfolds is vital to prevent the performance from becoming a fiasco. Governments and organizations must become vigilant ringmasters, ensuring that AI performs responsibly, safeguarding our privacy, and protecting our security.

AI Safety and Responsible AI

Let’s look at artificial intelligence (AI) as a new roommate moving into your home. Much like a good roommate who chips in with chores and contributes to a happier home, AI is already improving efficiency at work and making our lives easier. Experts believe that AI will soon be involved in nearly everything we do, from helping us find love and shop for groceries to navigating transportation, making health decisions, and sorting the day’s important news. Leaning further into our analogy, our ‘AI roommate’ may quickly become a ‘digital best friend,’ deeply involved in many important areas of our lives.

But as with any new roommate, privacy must be respected and boundaries set. In the same way, as AI becomes more embedded in our lives, researchers are working hard to ensure that it follows principles for safe and responsible use — a set of ‘house rules.’

Google AI has developed its own robust set of house rules to guide its AI systems’ behavior. Its principles include avoiding creating or reinforcing bias, remaining accountable when things go wrong, and being clear about what data its systems use and how. This transparent set of agreed rules encourages AI to be a beneficial and considerate roommate.

However, industry self-regulation is notoriously self-serving at times. Global experts and leaders are stressing the need for international and federal regulations and laws to ensure AI is both safe and responsible. Europe’s “Ethics Guidelines for Trustworthy AI” and the recent G7 commitment to establish the “Hiroshima AI Process” by the end of 2023 are early examples of house rules being set at a global scale.

However, building a fair and inclusive AI system can be challenging, similar to managing different personalities and needs in a shared living space. For example, AI must learn not to favor or discriminate against anyone, just like a good neighbor would avoid bias or favoritism in the larger community. Researchers are tackling these issues by analyzing data for biases and training AI models to treat everyone equally.
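To make the bias-auditing idea concrete, here is a toy sketch in Python. The dataset and the check are purely illustrative inventions, not any researcher’s actual method: it compares approval rates across two made-up groups, one simple signal analysts look at when asking whether a dataset or model treats groups equally (sometimes called a demographic-parity check).

```python
# Toy fairness audit: compare approval rates across groups
# on a small, entirely made-up dataset.

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Return the fraction of approved records per group."""
    totals, approved = {}, {}
    for r in rows:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r["approved"] else 0)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
gap = max(rates.values()) - min(rates.values())

# In this toy data, group A is approved twice as often as group B,
# a disparity a real audit would flag for closer investigation.
print(rates)
print(f"parity gap: {gap:.2f}")  # prints "parity gap: 0.33"
```

A real audit would, of course, use far more data and more nuanced measures, but the core move is the same: disaggregate outcomes by group and look for unexplained gaps.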

Regarding safety, think of AI as a friendly neighbor borrowing your chainsaw to tend an unruly tree: powerful and helpful, but only safe in careful hands. Self-driving cars use AI to make decisions on the road, and just as your neighbor must handle the borrowed chainsaw safely, AI must be programmed with fail-safes to prevent accidents, much like a good roommate makes sure the appliances are off after use.

With AI entering our lives, we face new ethical dilemmas. These issues arise in simple everyday tasks, larger societal questions, and even existential decisions. It’s like deciding who does the dishes or takes out the trash – simple tasks require fairness and accountability. If AI is used in policing, how can we ensure it doesn’t unfairly target certain groups? When AI assists in personalized healthcare, how do we respect individuals’ privacy while still providing the best care? And what governance is required to ensure runaway AI development does not turn Artificial Super Intelligence (ASI) into the nightmare of many science fiction stories?

As we invite AI, our new roommate, into our lives, we must establish house rules for safety, fairness, and respect. As AI becomes increasingly part of the family, continuous conversation and adaptation of these rules will help us live in harmony and mutual benefit.

Artificial Super Intelligence (ASI) & the Technological Singularity

Imagine you’re in a science fiction movie where robots have become smarter than humans. Suddenly, everything changes. Technology starts improving so quickly that we can’t keep up, and it feels like we’re lost in a blizzard of new inventions. This narrative is the essence of what some people call the “technological singularity” — a moment when artificial intelligence (AI) surpasses human intelligence.

But how realistic is this ‘robot revolution’? Is it a credible future we should prepare for or just an entertaining plot for sci-fi fans?

Our current AI technology, called artificial narrow intelligence (ANI), is more like a smart intern than a superintelligent being. It’s great at specific tasks, like recognizing pictures of cats or predicting the next word in a sentence, but it’s far from a genius that could regularly outsmart us across a broad range of topics. Even though we’ve made incredible strides in AI technology (like the development of ‘deep learning,’ where computers learn from experience), we are still a long way from a machine with human-level intelligence (artificial general intelligence, or “AGI”).

Moreover, AI doesn’t have feelings or desires. While a movie character might dream of world domination, real AI just follows the rules it’s given. It’s like a very advanced calculator: it can solve problems but doesn’t ‘want’ anything. It just processes data and follows its programming.

However, we should still pay attention to potential risks. For instance, what if our ‘smart calculator’ gets so smart it starts making decisions without us, like optimizing a factory at the expense of worker safety? Or what if it develops technology so complicated that humans can’t understand it anymore? These scenarios are worth pondering and preparing for.

The most immediate concern is the impact of AI on jobs. As AI gets smarter, it will take over tasks currently done by humans, leading to job loss. It’s like a wave of automation, simultaneously washing away traditional roles while creating new ones. The challenge lies in ensuring that people can adapt and learn new skills as quickly as the AI wave advances, to prevent significant job losses and growing economic inequality.

While the idea of a robot revolution or superintelligent AI makes for exciting dinner table conversation, it’s essential to separate facts from fiction. There are risks associated with AI’s rise, but they’re not necessarily the stuff of sci-fi horror. Careful planning, thoughtful regulation, and ongoing dialogue can help us navigate these issues as AI evolves. After all, as we continue to develop AI, we’re writing the script for this movie, and it’s up to us to steer it toward a happy ending.

The “Statement on AI Risk” Signed by AI Experts

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

– “Statement on AI Risk,” Center for AI Safety

Leaders in the field of artificial intelligence (AI), including heads of OpenAI, Google DeepMind, Anthropic, and renowned academic researchers, have warned about the existential risks posed by AI to humanity. They have likened these risks to societal-scale threats like pandemics and nuclear war. In a statement endorsed by over 350 AI executives, researchers, and engineers, including Sam Altman, Demis Hassabis, Dario Amodei, Geoffrey Hinton, and Yoshua Bengio, the experts assert that mitigating the risk of extinction from AI should be a global priority.

Of concern to these signatories are potential AI disaster scenarios, including the weaponization of AI, the spread of AI-generated misinformation, power concentration, oppressive censorship through pervasive surveillance, and enfeeblement where humans become dependent on AI. Despite these concerns, some experts, such as Yann LeCun from Meta and Arvind Narayanan from Princeton University, believe these apocalyptic warnings are exaggerated and distract from near-term AI-related issues such as system biases.

Other experts worry about the near-term consequences of AI advancements, such as the exacerbation of biases, discrimination, and misinformation, as well as the widening of the digital divide and increased inequality. However, the Center for AI Safety’s director, Dan Hendrycks, underscores that addressing current AI issues can also contribute to mitigating future risks.

The existential threat posed by AI has drawn significant media attention since March 2023, when experts, including Elon Musk, signed an open letter urging a halt to the development of next-generation AI technology. The new statement issued by AI leaders is designed to stimulate further discussion on the topic, comparing the risk to that posed by nuclear war.

High-profile figures such as Sam Altman and Google’s CEO Sundar Pichai are actively discussing these issues with world leaders, emphasizing the need for safe and secure AI development while acknowledging the technology’s potential benefits. Discussions about AI risks also took place at the recent G7 summit in Japan, which established a ministerial forum called the “Hiroshima AI Process” to advance discussions of AI governance, intellectual property rights, transparency, and other issues related to the use and adoption of generative AI. For this purpose, the new working group will engage the OECD group of developed countries (38 democracies with market-based economies) and the Global Partnership on Artificial Intelligence (a 2020 partnership between the G7, Australia, India, Mexico, New Zealand, and South Korea). The “Hiroshima AI Process” forum is expected to be formed by the end of this year.

Your Role in The AI Discussion

Please engage in the ongoing AI discussion here and elsewhere. It is essential to remain informed, curious, and open to its potential. Let’s explore the possibilities of AI together! What are your thoughts? What would you like to explore?

About the “AI 101” Article Series

AI-RISE articles in the AI 101 series are introductory material for anyone who wants accurate, conversational knowledge of this important technology shaping our world. This article makes the discussion more accessible to someone new to the AI conversation.


Article by Bart Niedner


All hail our technological overlords!
Now, where did I put my eyeglasses?!

— Bart Niedner

Bart Niedner, a versatile creative, embarks on a journey of discovery as he delves into both novel writing and the intriguing realm of AI-assisted writing. Bart warmly welcomes you on this journey from novice to master as he leverages his creative abilities in these innovative domains. His contributions to AI-RISE and BioDigital Novels reflect AI collaboration and exploratory work – the purpose of these websites.

About Bart Niedner


“Get Your Geek On!” (Related Reads)

  1. McClurg, John. “AI Ethics Guidelines Every CIO Should Read.” InformationWeek, 7 Aug. 2019, www.informationweek.com/ai-or-machine-learning/ai-ethics-guidelines-every-cio-should-read#.
  2. Department of Industry, Science and Resources. “Australia’s AI Ethics Principles.” Australia’s Artificial Intelligence Ethics Framework | Department of Industry, Science and Resources, 2022, www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles.
  3. Talagala, Nisha. “AI Ethics: What It Is and Why It Matters.” Forbes, 31 May 2022, www.forbes.com/sites/nishatalagala/2022/05/31/ai-ethics-what-it-is-and-why-it-matters/?sh=605228883537.
  4. Blackman, Reid. “A Practical Guide to Building Ethical AI.” Harvard Business Review, 15 Oct. 2020, hbr.org/2020/10/a‑practical-guide-to-building-ethical-ai.
  5. “Responsible AI Principles From Microsoft.” Microsoft, www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6.
  6. Baxter, Kathy. “Managing the Risks of Generative AI.” Harvard Business Review, 6 June 2023, hbr.org/2023/06/managing-the-risks-of-generative-ai.



Featured Image

Image Creation Remarks

“The Golden Era of AI” conjures a retro-futurist style in my mind. I thought the style was perfect to unify the featured images for the AI 101 Article 3 posts: “Pomp. Buzz. Fret”. They were great fun to make in Midjourney.

Retro-futurism is a design and artistic movement that combines nostalgia for old-fashioned aesthetics with futuristic technology and concepts. It emerged in the 1940s and 1950s but gained popularity in the 1970s and 1980s. The style often features elements of Art Deco, Space Age, and Atomic Age design, with a focus on sleek lines, geometric shapes, and bold colors. I have also seen quite a bit of it resurging over the past decade.

This image of a “scary AI overlord” felt nostalgic, delivering the perfect bit of overkill for an important and very real concern.

Midjourney Prompt

“retro-futuristic image representing a scary AI overlord”

Postprocessing

None.

