Connecting the Dots: What one more threat has to say about it

By JOHN BOS

Published: 05-26-2023 6:00 PM

We are facing a third threat to humankind in addition to nuclear warfare (short term) and climate change (long term). Artificial intelligence (AI) is a fast-evolving technology that can hasten both of these threats.

I asked this new threat, ChatGPT, a bunch of questions. Its responses were as follows.

My first question was: I am very concerned and worried about the increasing negative impacts on people and the environment caused by climate change. I think that corporate greenwashing is one of the causes of climate change denial. I think that financial support of politicians works against federal legislation that might mitigate climate change. My greatest recent concern is how artificial intelligence can be used to further strengthen these climate denial efforts and how that will expedite the negative impacts of climate change upon humankind.

ChatGPT: I understand your concerns about climate change and its various implications. It’s important to stay informed and gather reliable information to better understand the issues you are worried about.

ChatGPT then tells me that by researching scientific literature, books and documentaries, and connecting with environmental organizations and academic institutions, I’ll be better equipped to address my concerns. Yeah, sure.

Will climate deniers use AI to propagate those views?

ChatGPT: While it’s possible for individuals or groups with various viewpoints to use artificial intelligence (AI) to propagate their views, including climate change denial, it’s important to note that AI itself is a tool and its use depends on the intentions and actions of those who wield it.

“Those who wield it” is my fear.

ChatGPT: AI can be used to spread misinformation or amplify certain narratives, including climate change denial. It can be employed to automate the generation of fake news articles, social media posts, or videos that promote misleading information about climate change.

Is there any way to regulate the ethical use of AI?

ChatGPT: Regulating the ethical use of AI is a complex and evolving challenge. As AI technologies continue to advance and permeate various aspects of society, it becomes increasingly important to establish frameworks and guidelines to ensure their responsible and ethical deployment.

Regulating AI ethically is a rapidly evolving field, and the development and implementation of effective regulations require continuous efforts, collaboration, and multidisciplinary approaches. Governments, industry, academia, and civil society all play crucial roles in shaping the future of AI regulation to ensure its ethical use.

What happens if there is no way to regulate the ethical use of AI?

ChatGPT: Without regulation, it can be challenging to establish responsibility, transparency, and accountability in cases of AI system failures or harmful outcomes.

Will the Doomsday Clock be influenced by AI?

ChatGPT: The “Doomsday Clock” is a symbolic representation of how close humanity is perceived to be to global catastrophe, particularly related to nuclear weapons and other existential threats. It’s important to note that the Bulletin of the Atomic Scientists primarily focuses on nuclear weapons, climate change, and emerging technologies in their evaluation.

The use of AI technologies in the development and deployment of military systems, and the lack of human control over critical decisions, are relevant factors that the Doomsday Clock takes into account.

The use of AI in critical infrastructure, cybersecurity, and other systems may introduce vulnerabilities and risks that could potentially lead to unintended consequences or accidents with global ramifications.

On the other hand, AI can also be used as a tool to address global challenges and promote peace and stability. AI applications in areas such as climate modeling, conflict resolution, or pandemic response may have positive implications for global security and factor into the overall assessment.

Who decides what the ethical use of AI should be?

ChatGPT: Determining the ethical use of AI involves multiple stakeholders and should ideally be a collaborative and inclusive process.

You say that “Ideally, decisions regarding the ethical use of AI should involve a diverse range of perspectives, taking into account social values, human rights, privacy concerns, fairness, and long-term societal impacts.” Missing from this diverse range of perspectives is money … the potential for making huge amounts of money by misleading or deliberately deceiving the public. Should we the people be concerned?

ChatGPT: You raise a valid point. In cases where profit-driven motives are prioritized over ethical considerations, there is a risk of misleading or deceiving the public for financial gain. This can manifest in various ways, including the spread of misinformation, manipulation of AI algorithms, or unethical business practices.

Does anyone believe there is no “risk of misleading or deceiving the public for financial gain”?

“Connecting the Dots” is published every other Saturday in the Greenfield Recorder. John Bos will be happy to send you a PDF with the unabridged 8-page transcript of his “dialogue” with ChatGPT. Simply write ChatGPT in the subject line of an email to the address below. Comments, questions, and requests may be sent to john01370@gmail.com.
