By investing in research and development, addressing ethical concerns, and fostering dialogue and collaboration, we can ensure that the benefits of AI are realized while minimizing the potential risks.
As AI becomes more pervasive in our daily lives, it's critical to address the potential vulnerabilities that arise with these technologies. That's where responsible disclosure practices come in. In this blog post, we'll dive into what responsible disclosure is, why it's crucial for mitigating AI risks, and how researchers and developers can implement responsible disclosure practices to protect users and promote the responsible use of technology.
As the new year begins, we want to address the trending topic in AI: ChatGPT.
Tech blogger Simon Willison has written a series of posts about prompt injection attacks against GPT-3.
Preamble discovered a critical safety issue in large language models
The Preamble team was in Cannes April 14-16th for the first annual World
A new bill was introduced to reduce and prevent online harm to children.
Internet safety for children is a significant concern at Preamble. We are doing our
Read an interview with our Lead Engineer, Leyla Hujer, about women leading the AI industry.
The Preamble team was in Amsterdam for the 15th ACM RecSys Conference.
On October 5, 2021, testimony from Facebook whistleblower Frances Haugen was presented to a Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. Read our summary and how Preamble is working to mitigate these issues in the future.
The authors describe why middleware would be a good solution for internet platforms to adopt, giving users more control over the content they see. They also explain how a middleware service protects users' First Amendment rights.
We humans have a notoriously difficult time specifying what we actually want
Online platforms currently use various content moderation solutions to remove