News/Gov Tech

Aug 10, 2024

California Launches AI Education Program with NVIDIA

California launches pioneering AI education initiative: The State of California has announced a groundbreaking collaboration with NVIDIA to provide AI education resources to universities, community colleges, and adult education programs across the state. The big picture: This public-private partnership aims to support California's workforce training and economic development goals by equipping students and educators with essential skills in generative AI. The initiative recognizes the growing importance of AI across all sectors and California's responsibility to prepare its workforce for the future. NVIDIA, a world leader in AI computing, will provide resources and training to help California educators and students gain...

Aug 10, 2024

Senators Probe AI’s Role in Social Security Benefit Decisions

Senators probe Social Security Administration's AI use: Senators Ron Wyden and Mike Crapo have requested information from the Social Security Administration (SSA) regarding its implementation of artificial intelligence in eligibility and payment decisions. The bipartisan inquiry, submitted to SSA Commissioner Martin O'Malley, seeks details on the agency's AI risk management frameworks, personnel qualifications, and processes for expediting disability determinations and appeals. The senators emphasized the SSA's crucial role in distributing over $1 trillion in Social Security benefits and Supplemental Security Income payments annually to millions of beneficiaries. The deadline for the SSA to provide the requested information is September 3....

Aug 10, 2024

The Implications of California AI Safety Bill SB 1047

California's proposed AI legislation, SB 1047, is sparking intense debate in Silicon Valley, pitting safety advocates against those concerned about stifling innovation. The bill, which would require makers of large AI models to certify their safety and include safeguards, has passed the state's Senate Judiciary Committee and now faces further scrutiny. The big picture: California's proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aims to establish regulatory guardrails for AI development, reflecting growing concerns about potential risks associated with advanced AI systems. The bill would mandate that companies developing large AI models certify their safety, implement a kill...

Aug 8, 2024

UK’s £59M AI Safety Project Attracts Top Talent

The UK government's £59 million Safeguarded AI project, aimed at developing an AI system to verify the safety of other AIs in critical sectors, has gained significant traction with the addition of Turing Award winner Yoshua Bengio as its scientific director. This initiative represents a major step in the UK's efforts to establish itself as a leader in AI safety and foster international collaboration on mitigating potential risks associated with advanced AI systems. Project overview and objectives: The Safeguarded AI project seeks to create a groundbreaking "gatekeeper" AI capable of assessing and ensuring the safety of other AI systems deployed...

Aug 7, 2024

UK AI Strategy Shift Sparks Debate on Global Tech Competitiveness

The UK government's recent decisions regarding AI funding have sparked debate about the country's commitment to technological advancement and global competitiveness in the field of artificial intelligence. Funding announcement and policy shift: The UK government has unveiled a £32 million investment in nearly 100 cutting-edge AI projects across various sectors, while simultaneously scrapping £1.3 billion in previously promised funding for large-scale tech and AI initiatives. The £32 million will support 98 projects spanning diverse areas such as construction site safety and prescription delivery efficiency, benefiting over 200 businesses and research organizations. This new investment comes in stark contrast to the...

Aug 7, 2024

AI Startup Aims to Transform $759 Billion Government Contracting Industry

AI startup Sweetspot has secured $2.2 million in seed funding to revolutionize the government contracting process, aiming to make the $759 billion industry more accessible through its AI-powered platform. The big picture: Sweetspot's innovative approach to government contracting combines a comprehensive search engine with user-friendly software, positioning itself as a "TurboTax for government contracts" in a traditionally complex and opaque industry. The startup offers a search engine covering federal, state, and local contracts, alongside software to assist companies in applying for and tracking progress in the federal procurement process. Sweetspot's service is available at two price points: $720 per year...

Aug 7, 2024

Secretaries of State Demand Action on Grok's Spread of Election Misinformation

Election officials raise alarm over AI chatbot misinformation: Five secretaries of state have voiced serious concerns about Elon Musk's AI chatbot Grok spreading election misinformation on X, formerly known as Twitter. The officials sent a letter to Musk on Monday, highlighting that Grok provided incorrect ballot deadlines which were subsequently shared across social media platforms, potentially reaching millions of users. The false information persisted for 10 days before being corrected, raising questions about the speed and efficacy of error detection and correction mechanisms in AI-powered information systems. Minnesota Secretary of State Steve Simon emphasized the critical importance of voters receiving...

Aug 3, 2024

NTIA Recommends Monitoring AI Risks While Supporting Open-Weight Models for Innovation

The National Telecommunications and Information Administration (NTIA) has released a report supporting the widespread availability of powerful AI models, known as open-weight models, to promote innovation and accessibility. However, the report also calls for active monitoring of potential risks and outlines steps for collecting evidence, evaluating it, and taking action if necessary. Key recommendations: The report recommends that the U.S. government refrain from restricting the availability of open model weights for currently available systems while actively monitoring for potential risks: The government should develop an ongoing program to collect evidence of risks and benefits, evaluate that evidence, and act on...

Aug 2, 2024

California AI Safety Bill Receives Widespread Criticism from AI Community

A new bill authored by Sen. Scott Wiener is making its way through the California Legislature with the intent to prevent AI from causing catastrophic effects. The proposed legislation, Senate Bill 1047, requires developers to conduct safety testing prior to public deployment, a requirement that is drawing strong opposition from various stakeholders in the AI community. Key provisions of the bill: SB 1047 seeks to balance fostering AI innovation with managing associated risks: AI developers would be required to safely test advanced AI models before training or releasing them to the public. The state attorney general would have...

Jul 31, 2024

U.S. to Exempt Allies from Chip Restrictions

Key developments: The Biden administration is reportedly planning to exempt allies like Japan and the Netherlands from forthcoming restrictions on advanced semiconductor technology exports, leading to a rally in chip stocks worldwide: Reuters reported that the U.S. will exempt key allies from the planned trade restrictions, which aim to limit China's access to advanced semiconductor manufacturing equipment. The news led to significant gains for major chip companies in Japan, the Netherlands, and South Korea, with shares of Dutch semiconductor equipment maker ASML climbing as much as 11% in Amsterdam. Industry impact: U.S. chip giants Nvidia and AMD also saw substantial...

Jul 28, 2024

Weekly Recap: Open-Source AI Models, UBI and Domestic Policy

The open-source AI revolution gains momentum as Meta's Llama 3.1 narrows the gap with proprietary models, signaling a strategic shift towards commoditizing foundational AI technologies to capture value in adjacent markets. Open-source AI progress and challenges: Meta's release of Llama 3.1 marks a significant milestone in open-source AI, matching the performance of closed-source models on key benchmarks and prompting questions about the future of AI development: Meta's "commoditize your complement" approach aims to make large language models (LLMs) more generic and accessible, potentially reducing the market value of core AI models while positioning Meta to profit from the surrounding ecosystem....

Jul 27, 2024

NIST Releases Guidance to Improve AI Safety, Security, and Trust

The U.S. Department of Commerce announced new guidance and software tools from the National Institute of Standards and Technology (NIST) to help improve the safety, security, and trustworthiness of artificial intelligence (AI) systems, marking 270 days since President Biden's Executive Order on AI. Key NIST releases: NIST released three final guidance documents previously released in draft form for public comment in April, as well as two new products appearing for the first time: A draft guidance document from the U.S. AI Safety Institute intended to help mitigate risks stemming from generative AI and dual-use foundation models A software package called...

Jul 27, 2024

Ars Technica to Host AI and Infrastructure Events in San Jose and DC

Ars Technica is hosting two events this fall focused on the future of AI and data infrastructure, providing opportunities for attendees to engage with experts and network with fellow tech enthusiasts. September event in San Jose: The first event, titled "Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next," will take place on September 18 at the Computer History Museum in San Jose, California, exploring the implications of generative AI for data management: Topics will include addressing diverse workloads, identifying vulnerabilities using AI tools, and navigating the environmental impacts and responsibilities of infrastructure. The event will feature...

Jul 27, 2024

Apple Joins Tech Giants in White House’s AI Safety Pledge

Apple signs onto the White House's AI commitments, joining other major tech companies in promoting safe and responsible AI development. Key players and commitments: Apple is the latest company to sign the White House's voluntary AI agreement, which outlines principles for the safe and responsible development of artificial intelligence: The agreement, released last year, has already been signed by major tech companies such as OpenAI, Amazon, Google, Microsoft, Meta, Adobe, and Nvidia. By signing the agreement, Apple commits to promoting the responsible development and deployment of AI technologies, addressing potential risks and ethical concerns. Broader context and implications: The White...

Jul 27, 2024

White House Advances AI Leadership, Safety, and Innovation with New Actions and Commitments

The Biden-Harris administration has announced new actions and received an additional major voluntary commitment on artificial intelligence (AI), building on the landmark Executive Order issued by President Biden nine months ago to ensure America's leadership in managing the opportunities and risks of AI. Key developments: Apple has signed onto the voluntary AI commitments made by 15 leading U.S. AI companies last year, further cementing these commitments as cornerstones of responsible AI innovation. Federal agencies reported completing all 270-day actions in the AI Executive Order on schedule, making progress on critical areas such as managing AI's safety and security risks, protecting...

Jul 26, 2024

Forecasts of Existential Risk from AI Too Unreliable to Base Policy Decisions On, Some Experts Say

In a recent blog post, Arvind Narayanan and Sayash Kapoor argue that forecasts of existential risk from AI are based on speculation and pseudo-quantification rather than sound evidence or methodology. Key issues with AI existential risk forecasting: The article identifies several reasons why current AI existential risk probability estimates are unreliable and unsuitable for guiding policy: Inductive probability estimation is unreliable due to the lack of a suitable reference class, as an AI-driven human extinction event would be unprecedented and dissimilar to any past events. Deductive probability estimation is unreliable due to the lack of a well-established theory or model...

Jul 25, 2024

New Jersey Gambles $500M to be the Next AI Epicenter

New Jersey is enacting a hefty new tax credit program to attract AI companies, aiming to establish itself as a hub for AI innovation. However, the true economic impact and job creation potential remain uncertain. Key details of the AI tax credit program: New Jersey's governor signed a law offering up to $500 million in tax credits for AI companies and data centers that operate at large scales in the state: AI companies and data centers can qualify for the credits by diverting unspent funds from two other state tax credit programs enacted in response to the Covid-19 pandemic. To...

Jul 25, 2024

OpenAI CEO Pens Rare Op-Ed: U.S. Must Lead AI Development to Counter Authoritarian Control

In an op-ed published in The Washington Post, Sam Altman, CEO of OpenAI, has called for U.S. leadership in AI development to ensure a democratic vision for the technology and counter authoritarian efforts to control it. Key points from Altman's op-ed: Altman argues that the U.S. faces a strategic choice between advancing a global AI that spreads the technology's benefits and opens access to it, or allowing authoritarian nations to use AI to cement and expand their power. While the U.S. is currently ahead in AI development, Altman warns that this leadership is not guaranteed, urging politicians to take a...

Jul 25, 2024

Las Vegas Pioneers AI Weapon Detection on Buses, Tackling Rising Transit Violence Nationwide

The Las Vegas transit system plans to implement a pioneering AI-powered weapons detection system across its entire fleet of over 400 buses, making it the first in the nation to do so at such a scale. Key Takeaways: The Regional Transportation Commission of Southern Nevada is investing $33 million in a multi-year security upgrade that includes an AI-based gun detection software from ZeroEyes: The system scans riders to identify anyone brandishing a firearm in a threatening manner, aiming to give authorities a critical time advantage in responding to potential active shooter situations. When the AI detects a brandished gun, it...

Jul 19, 2024

California AI Bill Sparks Debate and Industry Pushback

California's landmark AI safety bill sparks debate and industry pushback. Key points and reactions: The introduction of California's SB 1047, which requires safety testing and shutdown capabilities for large AI models, has generated strong reactions and debates: The bill passed the state senate with bipartisan support (32-1) and has 77% public approval in California according to polls, but has faced fierce opposition from the tech industry, particularly in Silicon Valley. Tech heavyweights like Andreessen Horowitz and Y Combinator have publicly condemned the bill, arguing it will stifle innovation and push companies out of California. However, the bill's author Sen. Scott...

Jul 18, 2024

Trump’s AI Executive Order Draft Aims To Boost Military Tech and Cut Regulations

Trump allies draft sweeping AI executive order aimed at boosting military technology and reducing regulations, signaling a potential shift in AI policy if Trump returns to the White House in 2025. Key elements of the draft order: The plan, titled "Make America First in AI," outlines a series of "Manhattan Projects" to advance military AI capabilities and calls for an immediate review of what it terms "unnecessary and burdensome regulations" on AI development: The approach contrasts with the Biden administration's executive order from last October, which imposed new safety testing requirements on advanced AI systems. The proposed order suggests creating...

Jul 16, 2024

Hong Kong Develops ChatGPT-Style AI Tool Amid OpenAI Access Restrictions

Hong Kong is testing its own ChatGPT-style AI tool for government employees, with plans to eventually make it available to the public, after OpenAI took steps to block access to its services from the city and other unsupported regions. Key developments in Hong Kong's AI efforts: The local government is actively pursuing the development of its own generative AI model, which could have significant implications for the city's tech landscape and its relationship with major AI companies: The Hong Kong government's innovation bureau is currently testing a ChatGPT-style tool called "document assistance application for civil servants," developed by a local...

Jul 15, 2024

White House Adviser: AI Promises Progress But Requires Regulation

The time for regulating AI is now, according to Biden's top tech adviser. Key takeaways: Arati Prabhakar, director of the White House's Office of Science and Technology Policy (OSTP), views AI as a pressing issue with both promising and concerning implications: As the president's chief science and tech adviser, Prabhakar is helping guide the White House's approach to AI safety and regulation, including Biden's executive order from last fall. While excited about AI's potential to accelerate progress in areas like health, climate, and public missions, Prabhakar stresses the need to manage AI's risks in order to harness its benefits. She...

Jul 11, 2024

Utah Launches AI Policy Office to Shape Regulations and Mitigate Risks

Utah has launched its Office of Artificial Intelligence Policy to shape AI regulations and explore the most effective methods of mitigating risks while fostering responsible development. Key focus areas: The Office of Artificial Intelligence Policy (OAIP) will study AI-related issues and collaborate with industry experts, academics, and regulators to find and implement best practices in AI regulation: The OAIP's first focus area will be the use of AI in healthcare, with a specific emphasis on mental health applications. Businesses, academic institutions, and other subject matter experts are invited to participate in this initiative. The office is also seeking public input...
