Our organization is dedicated to conducting research that addresses the ethical, safety, and security concerns surrounding AI. We invite you to join us in our efforts to maximize the benefits of AI while minimizing its potential risks.
Preamble will be one of more than 200 leading AI stakeholders helping to advance the development and deployment of safe, trustworthy AI under the new U.S. Government safety institute.
As AI capabilities grow, so do the risks associated with its development and deployment. While addressing issues such as safety and security in AI is important, people also want AI that reflects their specific values and objectives. Preamble's platform offers tools for defining what matters to individuals and businesses when it comes to AI, allowing them to create AI policies that reflect their unique needs and values.
As AI becomes more pervasive in our daily lives, it's critical to address the potential vulnerabilities that arise with these technologies. That's where responsible disclosure practices come in. In this blog post, we'll dive into what responsible disclosure is, why it's crucial for mitigating AI risks, and how researchers and developers can implement responsible disclosure practices to protect users and promote the responsible use of technology.
As the new year begins, we want to address the trending topic in AI: ChatGPT.
The Preamble team was in Cannes April 14-16th for the first annual World
A new bill was introduced to reduce and prevent online harm to children.
Internet safety for children is a significant concern at Preamble. We are doing our
Read an interview with our Lead Engineer, Leyla Hujer, about women leading the AI industry.
The Preamble team was in Amsterdam for the 15th ACM RecSys Conference.
On October 5, 2021, Facebook whistleblower Frances Haugen testified before the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. Read our summary of her testimony and learn how Preamble is working to mitigate these issues in the future.
The authors describe why middleware would be a good solution for internet platforms to adopt, giving users more control over the content they see. They also explain how a middleware service protects users' First Amendment rights.
We humans have a notoriously difficult time specifying what we actually want
Online platforms currently use various content moderation solutions to remove