
Should concerns over ChatGPT justify a pause on AI?


By Neil Murphy, Chief Sales Officer at ABBYY

Some global tech leaders have called for a pause in AI development following the release of ChatGPT, but is a halt the right answer to addressing these concerns? Neil Murphy, Chief Sales Officer of intelligent automation company ABBYY, shares his take on how AI can progress amid growing unease.

AI impacts every area of our lives. From powering the world’s businesses to assisting us with everyday tasks like navigation, searching for information, and accessing our devices, we rely so much on AI that putting a stop to it isn’t viable.

This doesn’t mean that there aren’t real issues to be addressed, and the sooner national regulators act, the better.

There are many things that fit under the general umbrella of AI, and ChatGPT is one that is still highly experimental. People are currently quite willing to post text or other types of information that might be regarded as confidential. Because it’s unclear where or how that data is used, many companies are prohibiting employees from using ChatGPT, while some nations have already imposed restrictions.

Privacy concerns over the use of personal data prompted the Italian Data Protection Authority to temporarily block the use of ChatGPT nationwide. There was also an apparent lack of precautions to protect underage children from inappropriate responses.

In a complaint filed with the Federal Trade Commission (FTC), the Center for AI and Digital Policy argued that ChatGPT failed to meet guidance on the transparency and explainability of AI systems. It referenced ChatGPT’s own acknowledgements of several known risks, such as violating users’ privacy rights, creating damaging content, and disseminating false information.

As we’ve seen, large-scale language models such as ChatGPT have flaws notwithstanding their utility. Because the underlying AI model is based on deep learning algorithms that leverage large training data sets from the internet, there is a real possibility of inaccuracy. Furthermore, ChatGPT has been found to amplify bias, producing responses that may discriminate against certain racial and gender minority groups, something the company is trying to mitigate. Additionally, ChatGPT may attract criminal behaviour, threatening the safety and privacy of vulnerable users.

These concerns over ChatGPT and similar chatbots prompted the European Consumer Organisation (BEUC) to release a statement urging an investigation into large-scale language models and their detrimental effects. Since then, the European Parliament has published a commentary reinforcing the need to strengthen the current provisions of the draft EU Artificial Intelligence Act (AIA). This is edging closer to reality, as the draft Act recently received approval.

However, AI is developing far faster than regulatory approval processes can move. Moreover, public pressure may mean the Act’s scope is not as wide as it needs to be. There have been calls to broaden the regulation to cover a distinct, high-risk class of general-purpose AI systems, forcing developers to conduct much more stringent conformance testing before releasing systems onto the market and to check them continuously for harmful outputs.

According to this study, the shortcoming of the AIA is that it focuses primarily on conventional AI models rather than on the new and future generations of AI we are already starting to see. Based on the study, regulators should consider the following strategies.

Reporting 

Companies that develop AI should be required to submit periodic reports documenting the risk management procedures they are taking and how effective they are. It is also important that greenhouse gas emissions are measured and reported as part of this process. This will enable regulatory agencies and other organisations to make accurate assessments of the sustainability impact of AI and of the strategies that can mitigate it.
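
To make this concrete, here is a minimal sketch of how a development team might capture the emissions figure for such a report, using the open-source codecarbon library; the train_model function is a hypothetical placeholder for an actual training loop.

```python
# Minimal sketch: measuring greenhouse gas emissions during model training
# with the open-source codecarbon library (pip install codecarbon).
from codecarbon import EmissionsTracker


def train_model() -> None:
    """Hypothetical placeholder for a real training loop."""
    for _ in range(1_000_000):
        pass  # actual training work would happen here


tracker = EmissionsTracker(project_name="llm-training-run")
tracker.start()
try:
    train_model()
finally:
    # stop() returns the estimated emissions in kilograms of CO2-equivalent
    # and, by default, also writes a detailed emissions.csv report.
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```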

Transparency 

Any use of large-scale language models to create content needs to be disclosed by businesses to their customers. This will not only help to establish an industry-wide standard of transparency, but will also help organisations build trust with customers and the public. What needs to happen next is the creation of technical measures that enforce this transparency and rule out opportunities for nefarious activity.
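
By way of illustration, one such technical measure could be a machine-readable disclosure label attached to every piece of generated content. The sketch below is purely hypothetical: the field names and the tag_ai_content helper are illustrative assumptions, not an existing standard.

```python
# Hypothetical sketch of a machine-readable disclosure label for
# AI-generated content; the schema is illustrative, not a standard.
import json
from datetime import datetime, timezone


def tag_ai_content(text: str, model_name: str) -> str:
    """Wrap generated text in a disclosure label (hypothetical schema)."""
    label = {
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was created with a large language model.",
    }
    return json.dumps({"label": label, "content": text}, indent=2)


print(tag_ai_content("Welcome to our summer sale!", "example-llm-v1"))
```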

Risk Management 

There needs to be a comprehensive framework for risk management. Community codes of conduct are an organic way of evaluating risks, but they shouldn’t be the only precaution. Formal risk management should be applied to developers through limited, staged releases. This will help safeguard businesses and the public from unexpected harm while ensuring developers can still maintain their competitive advantage.
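
One way to picture a limited, staged release is as a rollout gate that only widens user exposure while monitored harm stays below a threshold. The sketch below is a hypothetical illustration; the stage fractions, threshold, and observed_harm_rate function are invented for the example.

```python
# Hypothetical sketch of a staged-release gate: exposure only widens
# while the observed rate of harmful outputs stays below a threshold.
STAGES = [0.01, 0.05, 0.25, 1.00]   # fraction of users given access
HARM_THRESHOLD = 0.001              # maximum tolerated harmful-output rate


def observed_harm_rate(stage: float) -> float:
    """Placeholder for real monitoring of flagged outputs at this stage."""
    return 0.0005  # assume monitoring reports a rate below the threshold


for stage in STAGES:
    rate = observed_harm_rate(stage)
    if rate > HARM_THRESHOLD:
        print(f"Halting rollout at {stage:.0%}: harm rate {rate:.4f}")
        break
    print(f"Stage {stage:.0%} cleared (harm rate {rate:.4f})")
else:
    print("Full release reached with all stages cleared.")
```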

Data Audits

When it comes to biases and discrimination in data, the responsibility for control should rest with humans rather than AI. Developers should proactively audit training data sets for misrepresentation or inaccurate data, especially where minority groups are affected. This will help them implement mitigation measures at the root of the problem so that no discriminatory data propagates further down the line.
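
As a simple illustration of where such an audit might begin, the sketch below checks how demographic groups are represented in a labelled data set; the records and the 30% flagging threshold are hypothetical.

```python
# Hypothetical first-pass audit of group representation in a labelled
# training set. Real audits would go much deeper (label quality, proxy
# variables, outcome disparities), but surfacing skew is a starting point.
from collections import Counter

# Illustrative records; in practice these would be loaded from the data set.
records = [
    {"text": "sample text 1", "group": "A"},
    {"text": "sample text 2", "group": "A"},
    {"text": "sample text 3", "group": "A"},
    {"text": "sample text 4", "group": "B"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())

for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <- possibly under-represented" if share < 0.30 else ""
    print(f"group {group}: {n}/{total} ({share:.0%}){flag}")
```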

The desire of innovators to gain first-mover advantage through an “act first and fix later” business model contributes to the hazards associated with disruptive technologies like AI. OpenAI released ChatGPT for widespread commercial use with a “buyer beware” notice, leaving users to weigh the risks themselves, despite being reasonably clear about potential flaws. Given how widely conversational AI systems are used, that approach isn’t the most effective. When dealing with such a disruptive technology, proactive regulation and strong enforcement mechanisms are the best way forward.

In today’s world, the presence and effects of AI are ubiquitous, which means a pause on development could have far-reaching repercussions. Rather than abruptly pumping the brakes, stakeholders from the legislative and business communities should work together in good faith to develop actionable regulation grounded in human-centric principles like responsibility, transparency, and fairness. Leaders from the public and private sectors need to commit to comprehensive, standardised regulations that will prevent malicious uses and minimise negative outcomes. This will help keep AI within the confines of enhancing human experiences.