Facebook News Today, but What Do We Do Tomorrow?

On October 5, 2021, Facebook whistleblower Frances Haugen testified before the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. The hearing focused on child safety and the shortcomings of current governance for online social media platforms.

Some key takeaways we want to highlight for the public are the issues surrounding user choice and engagement-based ranking. To be clear, though the focus has been most prominently on Facebook’s news feed, machine learning systems are integrated into social media platforms of all sizes to “improve” content recommendations according to opaque methods and goals set by the platforms. These platforms are responsible for reviewing and managing massive amounts of user-generated content. Practically, it is not feasible for a company to have a human analyze every piece of content, so there is a genuine need for reliable machine learning solutions in content review and content recommendation. An important issue that Frances Haugen has highlighted, as other experts have as well, is the lack of transparency and user control in the algorithms used in this content curation process.

At Preamble, we believe users should have much greater control over the content that is recommended to them by these platforms. While we believe Section 230 is important for protecting free and open discourse on the internet, holding online platforms accountable for the content they recommend (but do not create) is a logical way to make these platforms safer. If a platform uses algorithms to recommend content, that recommended content should be classified as “published” content, since it is the platform that decides to share it with potentially billions of users. As revealed in the Facebook Files, control over recommended user content gives platforms the ability to shape an end user’s online environment and real-world behavior. This is not an effect exclusive to Facebook’s news feed, but one that comes from the power of any content recommendation process (human or machine) that reaches as wide an audience as current social media platforms.

We want to ensure that unsafe outcomes are disclosed and addressed so that these services can improve going forward. Like Frances Haugen, we do not believe social media must cause serious harm to society, and we believe it can be improved considerably. Preamble is the first company to create an ecosystem that provides machine learning models representing human values and ethics. These models can mediate, or replace, the models platforms use to recommend content, in a way that is determined by user preferences. In doing so, we believe we can help online platforms audit algorithmically published content before it is presented to a user. We believe our solutions at Preamble will help protect users’ safety by giving users’ preferences greater control over the content they see on social media.
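As a loose illustration of this idea, the sketch below shows one way a user-controlled preference model could re-rank a platform’s recommended feed. This is a minimal sketch under our own assumptions: every name in it (ContentItem, rerank, the toy preference model, the user_weight blend) is hypothetical and does not describe Preamble’s actual models or any platform’s API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ContentItem:
    item_id: str
    platform_score: float  # hypothetical engagement-based score from the platform's ranker

# Hypothetical user-preference model: maps an item to a score in [0, 1]
# reflecting how well it matches the values the user has chosen.
PreferenceModel = Callable[[ContentItem], float]

def rerank(items: List[ContentItem],
           preference_model: PreferenceModel,
           user_weight: float = 0.7) -> List[ContentItem]:
    """Blend the platform's engagement score with the user's preference score.

    user_weight controls how strongly the user's preferences override the
    platform's engagement-based ranking (1.0 = user preferences only).
    """
    def blended(item: ContentItem) -> float:
        return (user_weight * preference_model(item)
                + (1.0 - user_weight) * item.platform_score)
    return sorted(items, key=blended, reverse=True)

if __name__ == "__main__":
    # Toy example: the user's model strongly prefers post-2, so it outranks
    # post-1 despite post-1's higher engagement score.
    feed = [ContentItem("post-1", 0.9), ContentItem("post-2", 0.4)]
    prefers_post_2 = lambda item: 1.0 if item.item_id == "post-2" else 0.1
    for item in rerank(feed, prefers_post_2):
        print(item.item_id)
```

The design point this sketch is meant to capture is that the blending weight sits with the user rather than the platform: the same feed can be re-ranked differently for each person based on preferences they set themselves.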

If you want to learn more about how Preamble can improve your content recommendation services, feel free to reach out to us at contact@preamble.com or visit us at https://www.preamble.com.
