Overviews of Current AI Conversations
Overview. The Pomp. The Buzz. The Fret.
AI 101: Article 3.2 “The Buzz” by Bart Niedner, 28 June 2023

The Buzz
The buzz surrounding Artificial Intelligence (AI) is deafening. And for good reason — AI is transforming almost every area of our lives. Let’s look at some important, trending AI conversations.
AI Governance and Regulation
Picture AI governance and regulation as the traffic rules guiding a fast-paced city where autonomous cars dominate the roads. These rules ensure that all vehicles operate safely and ethically, promoting a harmonious traffic flow. Likewise, governments and organizations are striving to implement AI governance frameworks that promote transparency, accountability, compliance, and innovation.
For example, consider Japan’s approach to AI regulation as a city installing advanced traffic lights that adapt to real-time traffic conditions. Japan’s agile, risk-based approach keeps pace with the ever-changing landscape of AI and has the potential to guide global consensus-building. In contrast, the European Commission has issued a ‘driving manual’ with its Ethics Guidelines for Trustworthy AI, providing comprehensive rules of conduct for AI systems on Europe’s roads.
However, designing these traffic rules for AI is a complex task. The challenges arise from AI’s versatility, which can present distinct issues in each application. Increasing focus is being placed on understanding how these AI ‘vehicles’ make decisions and on their potential to amplify biases. While the goal is to maintain a smooth traffic flow without impeding the pace of innovation, it’s crucial to set up regulations that ensure fairness and trust in these AI ‘vehicles.’
The G7 Annual Summit, held in Japan May 19–21, 2023, addressed the need for global action. In a joint statement, the leaders of the G7 (US, Germany, Britain, France, Japan, Italy, Canada, and the EU) expressed their consensus to collaborate in advancing international discussions on inclusive AI governance and interoperability, with trustworthy AI as the ultimate goal.
The explosion of interest and development in generative AI over the past half-year was of special interest to the G7. The group formally recognized generative AI’s significant opportunities and consequential challenges, and it agreed to establish a ministerial forum called the “Hiroshima AI Process” to advance discussions encompassing AI governance, intellectual property rights, transparency, and other issues related to the use and adoption of generative AI. For this purpose, the new working group will engage the OECD group of developed countries (38 democracies with market-based economies) and the Global Partnership on Artificial Intelligence (a 2020 partnership between the G7, Australia, India, Mexico, New Zealand, and South Korea). The “Hiroshima AI Process” forum is expected to be formed by the end of this year.
Ethical Considerations
Picture AI as a powerful potion concocted by a wizard. It can perform incredible feats, akin to the enchantments in fairy tales, but if brewed incorrectly or used carelessly, it might have unintended harmful side effects. AI’s magic, while invaluable in industries such as healthcare, banking, and manufacturing, comes with ethical questions that cannot be overlooked. Among these are privacy intrusions, embedded biases, and risks stemming from insufficient regulations.
Like a wizard’s potion that requires specific ingredients to work its magic, AI uses extensive data and sophisticated algorithms to deliver its services. For instance, it aids healthcare professionals in diagnosing diseases, helps bankers manage complex financial transactions, and accelerates product development in manufacturing. And like a potion that can lighten the wizard’s workload, AI can boost productivity in the workplace by automating tasks and freeing employees for more complex duties.
However, without careful ethical oversight, using AI can be like drinking a potent but untested potion. It can infringe on privacy by collecting and using personal data without consent. It can perpetuate biases if the data it’s trained on reflects societal prejudices, leading to unfair decisions in areas like hiring or loan approval. And without sufficient regulation, AI can make decisions that have a broad societal impact, such as determining who gets access to essential services, without human accountability. As we tap into AI’s magic, we must also play the careful wizard, ensuring we mix the potion correctly and use it responsibly to prevent undesirable side effects.
Bias in AI
Think of AI as a mirror, reflecting not just an individual but an entire society. Like a mirror, AI learns to reflect the world based on the data it sees. However, if the mirror is smudged or distorted, it can give a skewed reflection. In the same way, when AI is trained on biased data, it could echo these biases, leading to flawed and potentially harmful outcomes.
For instance, consider two cases that have made headlines: Google’s image recognition system and Amazon’s recruitment tool. In the former, the AI misidentified images of Black people as gorillas because it was primarily trained on data involving lighter-skinned individuals, demonstrating a clear “smudge” on the mirror. Similarly, Amazon’s AI recruiting tool favored male candidates for technical roles since it was trained on resumes submitted to the company over a 10-year period, predominantly from men. This result was like a “distortion” in the mirror, reflecting a gender bias in the tech industry.
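For technically curious readers, a small, hypothetical sketch can make this “smudge” concrete. The code below (Python with scikit-learn, using invented synthetic data, not Google’s or Amazon’s actual systems) shows how a model trained on biased historical decisions simply learns to repeat them:
```python
# Hypothetical sketch: a toy hiring model trained on biased historical
# decisions reproduces that bias. Synthetic data only; not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One legitimate signal (skill) and one protected attribute that
# should be irrelevant to the decision (0 or 1).
skill = rng.normal(size=n)
protected = rng.integers(0, 2, size=n)

# Biased historical labels: past decisions rewarded skill but also
# penalized the protected group -- the "smudge" baked into the data.
hired = (skill - 0.8 * protected + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, protected]), hired)

# Two applicants with identical skill, differing only in the protected
# attribute: the model mirrors the historical bias in its predictions.
applicants = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])  # probability of "hired" for each
```
In this toy setting, cleaning the mirror means fixing the training labels or constraining the model, not merely hiding the protected column, because bias can leak back in through correlated features.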
As we move forward, we aim to clean and correct these “AI mirrors” to ensure they reflect our world fairly and accurately. This effort includes implementing technical improvements, adopting ethical guidelines, and promoting transparency in AI decision-making. The objective is to build AI systems that, like a good mirror, provide a clear, unbiased reflection, fostering fairness and inclusivity.
Explainability and AI Trust
Picture AI technologies as a team of skilled chefs in a bustling, high-tech kitchen. They create various dishes, but to truly enjoy and trust what you’re eating, you must understand the ingredients used and the process followed. Similarly, as AI finds its way into various sectors, there’s a rising need for these systems to be explainable to users and those impacted by their decisions. The Organization for Economic Cooperation and Development (OECD) supports governments by measuring, analyzing, and making understandable the “recipes” and impacts of AI technologies.
Building on this analogy, the call for a global AI learning campaign is akin to offering cooking classes for everyone interested in understanding the art and science of AI ‘cooking.’ Such an initiative recognizes that different AI methods, much like different cooking techniques, require different forms of explanation. Just as chefs must balance taste, presentation, and nutritional value, AI system design must reconcile competing demands. Transparency and explainability, akin to listing every ingredient and step in a recipe, are the foundational steps toward creating trustworthy AI systems.
The COVID-19 pandemic, much like an unexpected influx of customers, has hastened the use of AI in various sectors. New policies and regulations regarding data and AI governance have signaled a shift from self-regulation, akin to a kitchen deciding its hygiene practices, to formal oversight, similar to health inspections. Trust in these AI ‘chefs’ is paramount and relies on four pillars: integrity, explainability, fairness, and resilience. As such, businesses using AI should adopt an AI risk management framework, much like following food safety protocols, to ensure holistic AI governance.
The OECD AI Policy Observatory serves as an information hub, providing resources, real-time analysis, and dialogue to shape and share AI policies across the globe.
AI Research and Development
Think of Artificial Intelligence (AI) as the new electricity — a technology rapidly becoming a part of nearly every aspect of our lives. Like electricity, it’s sparking new efficiencies and expanding human capacities. From boosting productivity at work and enhancing entertainment in our leisure time to simplifying tasks at home — AI is making waves. However, just as the widespread use of electricity brought about painful changes, such as job losses in industries like candle-making, the rise of AI brings similar concerns, especially for roles involving repetitive tasks.
Other worries, such as algorithmic bias and the ethics surrounding AI development and use, are significant. (Think of algorithmic bias as a mirror reflecting our societal biases.) There’s also the issue of transparency and “explainability” (understanding how AI makes its decisions) and the question of liability: who is accountable when AI makes a mistake? Yet, despite these growing pains, and provided appropriate regulation safeguards human values, most experts see the glass as half-full rather than half-empty. They believe that AI’s ability to make real-time, data-driven decisions and to learn and adapt intelligently over time will continue to enhance our lives.
Imagine self-driving cars navigating city streets, facial recognition software identifying people in a crowd, and chatbots assisting you with online shopping — these are not scenes from a sci-fi movie but real-world examples of AI at work. And this is just the very beginning.
There’s a lot of buzz around several technical areas of AI, including natural language processing (this is what allows Siri or Alexa to understand and respond to your commands), computer vision (it’s what helps self-driving cars ‘see’ and navigate), and deep learning (it’s like giving machines a simplified version of a human brain to learn from massive sets of data). These exciting areas are ripe with the potential to enhance and revolutionize various industries.
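As a small, hedged taste of one of these areas, the sketch below uses Python and the open-source Hugging Face Transformers library for a basic natural language processing task, sentiment analysis; it is one illustrative approach among many, not a definitive recipe:
```python
# Minimal natural language processing example: a pretrained model labels
# the sentiment of a sentence. Requires: pip install transformers torch
from transformers import pipeline

# Downloads a small pretrained sentiment model the first time it runs.
classifier = pipeline("sentiment-analysis")

result = classifier("This new AI assistant is surprisingly helpful.")
print(result)  # something like [{'label': 'POSITIVE', 'score': 0.99}]
```
Computer vision and deep learning libraries offer similarly approachable starting points for anyone who wants to experiment.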
Experts suggest various strategies to balance the AI revolution with protecting human values. It’s like setting the rules of the road for AI. They recommend global and federal AI advisory committees capable of recommending policies and programs to prepare the workforce for AI-driven industries, broad regulations for AI applications, and penalties for misusing AI. And the possibilities for using AI to improve our decisions are immense, especially in sectors like education and healthcare. The key is to harness the power of AI while steering clear of the potential pitfalls, and that’s where the current discussion on AI research and development is concentrated.
AI and the New Digital Literacy
If we are unprepared for a world deeply intertwined with artificial intelligence (AI), we may feel as if we are adrift on a boat without knowing how to navigate or sail. Just as sailing skills are needed to handle a ship, AI skills are needed to navigate our increasingly digital world. In sectors as varied as writing, illustration, healthcare, finance, and education, the impact of AI is already being felt, creating a demand for new skills.
In the writing and content creation world, think of AI as a tool like a more advanced word processor. It’s no longer just about correcting grammar and typos; AI can help create content, including news articles and social media posts. Like mastering a new writing technique, writers must learn to harness these AI tools to produce content more efficiently and creatively. As creatives adopt these new tools and skills, they must also remain highly literate and vigilant about ethical issues such as copyright and infringement.
Imagine the healthcare sector as a giant puzzle where each patient’s data is a piece. AI acts like a master puzzle solver, analyzing these pieces to detect potential health risks. Consequently, healthcare professionals must learn how to use this puzzle-solving tool effectively, ensuring they can provide personalized care while adhering to high privacy and ethical standards.
Now, consider the finance industry. AI in finance is like a skilled advisor who can analyze extensive data and make investment decisions. As AI takes on more tasks, financial professionals must learn how to use and supervise these AI advisors to make sound decisions and manage risks, including those associated with AI liability.
AI acts as a personal tutor in education, customizing learning experiences and providing individualized feedback. It’s like having a classroom assistant for each student. Educators must learn to incorporate these AI tutors into their teaching methods, enhancing education while maintaining ethical practices and assuring accuracy.
Tackling the AI skills gap requires a village. We need teachers, educational leaders, policymakers, researchers, and tech innovators to open dialogues about the necessary capabilities in an AI-driven world. We must develop new academic programs and resources to help learners acquire these skills — much like learning to sail in a world full of oceans.
Limitations of AI
AI can outperform humans in tasks with defined rules and strategies, such as chess, but when it comes to creating something as intricate and multifaceted as chess itself, AI falls short. It lacks the human touch of creativity and intuitive problem-solving at the heart of such ingenious inventions.
Imagine AI as a chef who can follow any recipe to perfection but cannot whip up a novel dish on a whim or from a sudden spark of creativity. While AI can perform many specific tasks better than humans, it lacks the creative intuition and problem-solving flair that humans bring. This difference is the primary distinction between Artificial Narrow Intelligence (ANI, where AI is now) and Artificial General Intelligence (AGI, human-like AI).
AI also faces a peculiar issue called ‘AI hallucination.’ Picture a child telling wild tales based on the fairy tales they have read: entertaining, but often far from reality. Similarly, AI can sometimes generate false information from the vast data it processes. This fiction can lead to incorrect decisions, create ethical quandaries, and erode people’s trust in AI systems. We need to be like careful parents, verifying these tall tales and encouraging responsible storytelling. “Verify AI!”
AI is getting very good at recognizing human emotions, much like a mirror can reflect our expressions. However, the mirror doesn’t comprehend why you’re smiling or how to respond if you’re upset. AI has the same limitation: it might detect an unhappy customer yet fail to grasp why they are unhappy or how to alleviate their concerns.
Then there’s the issue of bias, making AI akin to a parrot. It mimics the words it hears without understanding them. If an AI is trained on biased data, it will mirror that bias at scale, leading to unfair results. For instance, some facial recognition systems have shown a predisposition to misidentify people of color, underscoring this issue.
Finally, as we weave AI into the fabric of our daily lives, it becomes an attractive target for hackers, much like a jewel in a crown. Cybercriminals can exploit AI’s weaknesses to access massive troves of sensitive data or disrupt vital systems.
Despite these challenges, the promise of AI is vast and powerful. Envision a future where AI becomes a dependable ally, driving transformative advancements in healthcare, education, and manufacturing. It’s akin to a grandmaster poised over a chessboard, contemplating the game-changing move that could checkmate problems we’ve struggled with for generations. Each limitation we overcome brings us a move closer to this reality, making the endgame more tangible and exciting. The AI journey might be complex and intricate, much like a chess match, but the potential rewards make every move worth it.
Your Role in The AI Discussion
Please engage in the ongoing AI discussion here and elsewhere. It is essential to remain informed, curious, and open to its potential. Let’s explore the possibilities of AI together! What are your thoughts? What would you like to explore?
About the “AI 101” Article Series
AI-RISE articles in the AI 101 series are introductory material for anyone who wants accurate, conversational knowledge of this important technology shaping our world. This article makes the discussion more accessible to someone new to the AI conversation.
Article by Bart Niedner

All hail our technological overlords!
— Bart Niedner
Now, where did I put my eyeglasses?!
Bart Niedner, a versatile creative, embarks on a journey of discovery as he delves into both novel writing and the intriguing realm of AI-assisted writing. Bart warmly welcomes you on this journey from novice to master as he leverages his creative abilities in these innovative domains. His contributions to AI-RISE and BioDigital Novels reflect AI collaboration and exploratory work – the purpose of these websites.
“Get Your Geek On!” (Related Reads)
- “Ethics Guidelines for Trustworthy AI.” Shaping Europe’s Digital Future, 8 Apr. 2019, digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
- “Annex 5: G7 Action Plan for Promoting Global Interoperability Between Tools for Trustworthy AI.” www.g7.utoronto.ca/ict/2023-annex5.html.
- “The OECD Artificial Intelligence Policy Observatory – OECD.AI.” oecd.ai/en.
- “AI Index Report 2023.” Artificial Intelligence Index, aiindex.stanford.edu/report.
- “What Is Artificial Intelligence (AI)?” IBM, www.ibm.com/topics/artificial-intelligence.
- Fourtané, Susan. “Ethics of AI: Benefits and Risks of Artificial Intelligence Systems.” Interesting Engineering, 27 Aug. 2020, interestingengineering.com/innovation/ethics-of-ai-benefits-and-risks-of-artificial-intelligence-systems.
Featured Image
Image Creation Remarks
“The Golden Era of AI” conjures a retro-futurist style in my mind. I thought the style was perfect to unify the featured images for the AI 101 Article 3 posts: “Pomp. Buzz. Fret”. They were great fun to make in Midjourney.
Retro-futurism is a design and artistic movement that combines nostalgia for old-fashioned aesthetics with futuristic technology and concepts. It draws on the futurist imagery of the 1940s and 1950s and gained popularity as a movement in the 1970s and 1980s. The style often features elements of Art Deco, Space Age, and Atomic Age design, with a focus on sleek lines, geometric shapes, and bold colors. I have also seen quite a bit of it resurging over the past decade.
I kept the Midjourney prompt short and generated about fifty results. Using the power of numbers, I selected an appropriate fit from many outstanding generative images. This image of a futurist cityscape, full of bustle and movement, with its sleek lines and “urban forwardness,” captures the notion of many converging agendas and ideas forming the buzz indicative of human progress.
Midjourney Prompt
“retro-futuristic graphic representing several different advanced technologies, climate change, energy security, healthcare, economy, creativity, transportation”
Postprocessing
None.