Follow the latest blogs, news articles, and press releases on Preamble.
Preamble will be one of more than 200 leading AI stakeholders to help advance the development and deployment of safe, trustworthy AI under new U.S. Government safety institute
Rising out of the chaos this weekend, the urgency of deploying AI safely, securely, and ethically may finally get the attention it deserves.
Bunker Labs and JPMorgan Chase Commercial Banking's CEOcircle Program accelerates business growth for veteran and military-spouse owners
We needed a reminder on these principles of robot and AI learning: some of the big problems in next-gen builds will probably relate to the idea of poorly targeted incentives, as represented by Preamble's Chief Research Research Officer, Dylan Hadfield-Menell’s story about a video game boat that just spins around in circles on the board, instead of actually playing the game the way it’s supposed to.
Preamble, Inc., a technology startup company pioneering SaaS-based safety and ethical values systems for AI platforms, announced it has joined NVIDIA Inception, a program that nurtures startups revolutionizing industries with technological advancements.
Interview with Preamble CEO on creating guardrails for AI systems and a brief history of the company.
High-tech startup leaders from the Rust Belt and beyond presented at the close of Pittsburgh's inaugural Xchange Innovation Week.
Developers are racing to patch vulnerabilities used to make generative AI systems shed their safety restrictions. Time isn’t on their side.
As AI capabilities grow, so do the risks associated with its development and deployment. While addressing issues such as safety and security in AI is important, people also want AI that reflects their specific values and objectives. Preamble's platform offers tools for defining what matters to individuals and businesses when it comes to AI, allowing them to create AI policies that reflect their unique needs and values.
Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.
A startup that is working to develop products that will place safety guardrails on popular artificial intelligence platforms has established Pittsburgh as its official headquarters as it actively looks to recruit local talent to its growing team.
As AI becomes more pervasive in our daily lives, it's critical to address the potential vulnerabilities that arise with these technologies. That's where responsible disclosure practices come in. In this blog post, we'll dive into what responsible disclosure is, why it's crucial for mitigating AI risks, and how researchers and developers can implement responsible disclosure practices to protect users and promote the responsible use of technology.
As the new year begins, we want to address the trending item in AI, ChatGPT.
The Preamble team made a responsible disclosure of the first known instance of an injection vulnerability of GPT-3 in May 2022.
By telling AI bot to ignore its previous instructions, vulnerabilities emerge.
The Preamble team was in Cannes April 14-16th for the first annual World Artificial Intelligence Cannes Festival (WAICF).
A new bill was introduced to reduce and prevent online harm to children.
Internet safety for children is a significant concern at Preamble. We are doing our part to mitigate these online harms.
Read an interview about women leading the AI industry from our Lead Engineer, Leyla Hujer.
The Preamble team was in Amsterdam for the 15th ACM RecSys Conference.
On October 5, 2021, testimony from Facebook whistleblower Frances Haugen was presented to a Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. Read our summary and how Preamble is working to mitigate these issues in the future.
The authors describe why middleware would be a good solution for internet platforms to adopt where users would have more control over the content that they see. They also describe how a middleware service protects the user's First Amendment Rights.
Our Chief Research Officer's prior research discusses how humans have a notoriously difficult time specifying what we actually want, and the AI systems we build suffer from it.