News / AI Safety
Facebook & Instagram Memories Fuel AI: Meta’s Controversial Training Data Plans Spark Privacy Debate
Meta is repurposing personal Facebook and Instagram posts as AI training data, raising concerns about the privacy implications of transforming digital memories into fodder for machine learning algorithms.

Key points about Meta's AI training plans:
- Meta recently announced that public posts, photos, and even names from Facebook and Instagram will be used to train AI models starting June 26, effectively treating users' online histories as a time capsule of humanity.
- Private messages, posts shared only with friends, and Instagram Stories are excluded, but all other public content is fair game for Meta's AI.
- Europeans have been given a temporary reprieve...
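To make the stated inclusion rule concrete, here is a minimal sketch of a filter that keeps only public, non-Story, non-message content. The Post type and its field names are assumptions for illustration, not Meta's actual data model.

```python
# A sketch of the inclusion rule described above: only public posts are
# eligible for training; private messages, friends-only posts, and Stories
# are excluded. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    visibility: str          # "public", "friends", or "private" (assumed values)
    is_story: bool = False
    is_message: bool = False

def eligible_for_training(post: Post) -> bool:
    """Apply the stated rule: public, non-Story, non-message content only."""
    return (post.visibility == "public"
            and not post.is_story
            and not post.is_message)

posts = [
    Post("Public vacation photos!", "public"),
    Post("Shared only with friends", "friends"),
    Post("A public Story", "public", is_story=True),
]
training_corpus = [p.text for p in posts if eligible_for_training(p)]
print(training_corpus)  # -> ['Public vacation photos!']
```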
Jun 20, 2024
YouTube Deepfake Dilemma: Navigating the Blurry Line Between Creativity and Deception
YouTube rolls out a new process for removing AI-generated deepfakes, but challenges remain in determining what constitutes a violation.

Key details of YouTube's AI deepfake removal process:
- The company has expanded its takedown request options for AI-generated content, allowing users to flag deepfakes for potential removal.
- YouTube will consider factors such as whether the AI content could be mistaken for the real thing, and whether it falls under parody or satire, when deciding to remove the flagged material.
- This move follows the company's initial announcement in November 2023 about its plans to address the growing concern of AI-generated deepfakes on the platform.

Balancing...
Jun 20, 2024
Anthropic’s “Constitutional AI” Aligns AI with Human Values, Paving Way for Responsible Development
Anthropic's constitutional AI aims to align AI systems with human values, representing a significant advancement in the field of artificial intelligence and paving the way for more responsible and beneficial AI development.

Key Takeaways:
- Anthropic, an artificial intelligence research company, has developed a new approach called "constitutional AI" that seeks to create AI systems guided by clear principles and values.
- Constitutional AI involves training AI models to behave in accordance with a set of predefined rules, values, and behaviors, similar to how a constitution guides a government.
- This approach aims to ensure that AI systems act in ways that are...
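To illustrate the idea, here is a minimal sketch of the critique-and-revision loop at the heart of constitutional AI. The generate() function is a stand-in for a real language-model call, and the principles and prompts are illustrative assumptions, not Anthropic's actual constitution or API.

```python
# A sketch of constitutional AI's critique-and-revision loop. generate() is a
# placeholder for a real language-model call; the principles below are
# illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could encourage illegal or dangerous activity.",
]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; returns a placeholder string for demonstration."""
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle: {principle}\n"
            f"Response: {response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    # In the full method, revised responses become training data for
    # supervised fine-tuning, followed by an RL phase with AI feedback.
    return response

print(critique_and_revise("How should I handle an angry customer?"))
```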
Jun 20, 2024
AI “Openness Gaps” Revealed: Study Finds Tech Giants Less Transparent Than Smaller Players
A new analysis of 40 large language models (LLMs) claiming to be "open source" found significant discrepancies in the level of openness provided by different developers, with smaller players generally being more transparent than tech giants.

Key Takeaways:
- Researchers Mark Dingemanse and Andreas Liesenfeld created an openness league table assessing models on 14 parameters, revealing that many models described as "open source" fail to disclose important information about the underlying technology.
- Around half of the analyzed models do not provide any details about training datasets beyond generic descriptions.
- Truly open-source models should allow outside researchers to inspect, customize, and reproduce...
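A league table of this kind boils down to a checklist score per model. The sketch below assumes a reduced set of criteria and made-up model entries purely for illustration; the actual study scores 40 models on 14 parameters.

```python
# A sketch of an openness "league table" as a checklist score. The criteria
# and model entries are hypothetical, not the study's real findings.

CRITERIA = [
    "weights_released",
    "training_data_documented",
    "code_available",
    "peer_reviewed_paper",
]

# Hypothetical entries -- not the study's real per-model results.
models = {
    "small_lab_model": {"weights_released": True, "training_data_documented": True,
                        "code_available": True, "peer_reviewed_paper": True},
    "big_tech_model":  {"weights_released": True, "training_data_documented": False,
                        "code_available": False, "peer_reviewed_paper": False},
}

def openness_score(checklist: dict) -> float:
    """Fraction of criteria the model satisfies."""
    return sum(bool(checklist.get(c)) for c in CRITERIA) / len(CRITERIA)

# Rank from most to least open, mirroring the league-table idea.
for name, checklist in sorted(models.items(),
                              key=lambda kv: openness_score(kv[1]),
                              reverse=True):
    print(f"{name}: {openness_score(checklist):.0%} of criteria met")
```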
Jun 19, 2024
AI Pioneer Ilya Sutskever Launches New Company to Tackle the Most Critical Problem of Our Time: Safe Superintelligence
Ilya Sutskever, co-founder of OpenAI, launches a new AI company focused solely on developing safe superintelligence, raising questions about the future of AI safety research and the competitive landscape.

Key details of Sutskever's new venture:
- Safe Superintelligence Inc. (SSI) was founded just one month after Sutskever's departure from OpenAI, where he served as chief scientist.
- Sutskever co-founded SSI with ex-Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy.
- The company's singular mission is to build "safe superintelligence," which its founders consider the most important technical problem of our time.
- Unlike OpenAI's nonprofit origins, SSI is designed from...