Google doesn’t want its employees using Bard code • The Register

AI in brief Google has warned its own employees not to disclose confidential information or use the code generated by its AI chatbot, Bard.

The policy isn’t surprising, given the chocolate factory also advised users not to include sensitive information in their conversations with Bard in an updated privacy notice. Other large firms have similarly cautioned their staff against leaking proprietary documents or code, and have banned them from using other AI chatbots.

The internal warning at Google, however, raises concerns that AI tools built by private concerns cannot be trusted – especially if the creators themselves don’t use them due to privacy and security risks.

Cautioning its own workers not to directly use code generated by Bard undermines Google’s claims that its chatbot can help developers become more productive. The search and ads dominator told Reuters its internal ban was introduced because Bard can output “undesired code suggestions.” Such issues could lead to buggy programs or complex, bloated software that costs developers more time to fix than if they hadn’t used AI to code at all.

Microsoft-backed voice AI maker sued

Nuance, a voice recognition software developer acquired by Microsoft, has been accused of recording and using people’s voices without permission in an amended lawsuit filed last week. 

Three people sued the firm, accusing it of violating the California Invasion of Privacy Act – which states that businesses cannot wiretap consumer communications or record people without their explicit written consent. The plaintiffs claim Nuance records people’s voices during phone calls with call centers that use its technology to verify callers.

“Nuance performs its voice examination entirely in the ‘background of each engagement’ or phone call,” the plaintiffs claimed. “In other words, Nuance listens to the consumer’s voice quietly in the background of a call, and in such a way that consumers will likely be entirely unaware they are unknowingly interacting with a third party company. This surreptitious voice print capture, recording, examination, and analysis process is one of the core components of Nuance’s overall biometric security suite.”

They argue that recording people’s voices exposes them to risks – they could be identified when discussing sensitive personal information – and means their voices could be cloned to bypass Nuance’s own security features. 

“If left unchecked, California citizens are at risk of unknowingly having their voices analyzed and mined for data by third parties to make various determinations about their lifestyle, health, credibility, trustworthiness – and above all determine if they are in fact who they claim to be,” the court documents argue.

The Register has asked Nuance for comment. 

Google does not support the idea of a new federal AI regulatory agency

Google’s DeepMind AI lab does not want the US government to set up an agency singularly focused on regulating AI.

Instead, it believes the job should be split across different departments, according to a 33-page report [PDF] obtained by the Washington Post. The document was submitted in response to an open request for public comment launched by the National Telecommunications and Information Administration in April. 

Google’s AI subsidiary called for “a multi-layered, multi-stakeholder approach to AI governance” and supported a “hub-and-spoke approach” – whereby a central body like NIST could oversee and guide policies and issues tackled by numerous agencies with different areas of expertise. 

“AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors – which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed,” the document states.

Google DeepMind’s view differs from that of other companies, including OpenAI and Microsoft, as well as policy experts and lawmakers, who support the idea of building an AI-focused agency to tackle regulation.

Microsoft rushed to release the new Bing despite OpenAI’s warnings

OpenAI reportedly cautioned Microsoft against releasing its GPT-4-powered Bing chatbot too quickly, warning that it could generate false information and inappropriate language.

Bing shocked users with its creepy tone and sometimes manipulative or threatening behaviour when it launched. Later, Microsoft restricted conversations to prevent the chatbot going off the rails. OpenAI had previously urged the tech titan to hold back on releasing the product to work on its issues.

But Microsoft didn’t seem to listen and went ahead anyway, according to the Wall Street Journal. That wasn’t the only conflict between the AI advocates, however. Months before Bing was launched, OpenAI released ChatGPT despite Microsoft’s concerns it could steal the limelight away from its AI-powered web search engine.

Microsoft has a 49 per cent stake in OpenAI, and gets to access and deploy the startup’s technology ahead of rivals. Unlike with GPT-3, however, Microsoft doesn’t have exclusive rights to license GPT-4. At times, this can make things awkward – OpenAI will often be courting the same clients as Microsoft or other businesses that are directly competing with its investor. 

Over time, this could make their relationship rocky. “What puts them on more of a collision course is both sides need to make money,” said Oren Etzioni, ex-CEO of the Allen Institute for Artificial Intelligence. “The conflict is they’ll both be trying to make money with similar products.” ®

