Social media platforms design their algorithms to show people what they engage with. When this works well, the result can be genuinely useful. When it doesn't, these systems can exploit some of our worst instincts to keep us glued to our screens, feeding people increasingly hateful versions of what they already hate or more fearful versions of what they already fear. To address this social dilemma, Preamble has developed a simple approach that puts people back in the driver's seat and lets them harness the power of AI in the way that suits them best.
The problem, at its core, is that machine learning is inherently blind to the values that drive us. We believe the key to building a better digital world is transparency and inclusion. At Preamble, we are working with non-profits and external subject-matter experts to define values for AI systems. Our mission is to ensure that AI supports and nurtures our better selves, helping us change the world and society for the better.
To keep our algorithm inclusive, we need your help teaching our systems which content is favorable and which is not, by assigning each piece of content a rank. As a partner, you will not only receive monetary compensation for your efforts; you will also gain an opportunity to promote the values your organization strives for.