How Eventbrite Personalizes and Moderates Content

Last updated: February 17, 2024

Eventbrite’s mission is to bring people together through live experiences. We are deeply committed to creating a marketplace that creators and consumers trust to facilitate safe, inclusive experiences. In this article, we share more about how we keep our online marketplace safe, including how we personalize and moderate content on our platform.

In this article

  • How We Personalize Content
  • How We Moderate Content
  • How We Detect Content Violations
  • How We Review Content

How We Personalize Content

To help you discover the most relevant experiences on the Eventbrite marketplace, we use the data we have about you to prioritize the events we surface to you. Signals we use to personalize the event listings and ads you see include content you have viewed, clicked, liked, followed, or searched for, as well as information you provide when you respond to one of our surveys or register for or purchase tickets to an event. You can learn more about how we use and protect your data in our Privacy Policy.
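To make the idea of signal-based prioritization concrete, here is a minimal, purely illustrative sketch in Python. The signal names, weights, and event categories are assumptions chosen for the example; they do not represent Eventbrite’s actual ranking logic.

```python
"""Illustrative sketch: prioritizing event listings by engagement signals.
Signal names and weights are hypothetical, not Eventbrite's ranking code."""


def relevance_score(event: dict, user_signals: dict) -> float:
    """Score an event higher when its category matches things the user
    has viewed, liked, followed, or searched for."""
    weights = {"viewed": 1.0, "liked": 2.0, "followed": 3.0, "searched": 1.5}
    score = 0.0
    for signal, categories in user_signals.items():
        if event["category"] in categories:
            score += weights.get(signal, 0.0)
    return score


user_signals = {
    "viewed": {"music", "food"},
    "liked": {"music"},
    "followed": {"tech"},
    "searched": {"food"},
}
events = [
    {"name": "Jazz Night", "category": "music"},
    {"name": "Food Truck Fair", "category": "food"},
    {"name": "Pottery Class", "category": "crafts"},
]

# Surface the most relevant events first.
for event in sorted(events, key=lambda e: relevance_score(e, user_signals), reverse=True):
    print(event["name"], relevance_score(event, user_signals))
```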

How We Moderate Content

Content moderation is the process by which a platform reviews and takes action on content that is illegal, inappropriate, or harmful. Our Community Guidelines provide transparency into how we keep our community safe. They also serve as the guardrails for what types of content are encouraged and what types of content threaten the integrity of our platform and are not allowed. To combat platform abuse, we rely on a combination of tools and processes, including proactive detection via machine learning technology and rules-based systems, reactive detection via reports from our community, and human reviews.

How We Detect Content Violations

Proactive Detection

We detect a significant portion of content violations proactively via two primary methods: (1) rules-based systems (i.e., hand-crafted rules to identify content that meets specific criteria), and (2) machine learning technology (i.e., a statistical process in which we train a model to recognize certain types of patterns based on previous examples and make predictions about new data). 

Rules-based systems are effective when there are a few simple criteria indicating elevated risk. Machine learning technology leverages multiple signals and is thus more effective when there are complex patterns indicating elevated risk. Both detection strategies are validated by offline analysis and monitored for continuous improvement opportunities.
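As a rough illustration of the difference between the two approaches, the sketch below pairs a hand-crafted rule with a simple multi-signal score. All rule criteria, feature names, weights, and thresholds here are hypothetical assumptions for the example and are not Eventbrite’s actual detection systems.

```python
"""Illustrative sketch of the two proactive detection strategies.
Rule criteria, features, and weights are hypothetical examples only."""

import math


def rules_based_flag(listing: dict) -> bool:
    """Rules-based system: a few simple, hand-crafted criteria indicating elevated risk."""
    return (
        listing["account_age_days"] < 1 and listing["ticket_price"] > 500
    ) or "guaranteed returns" in listing["description"].lower()


def ml_risk_score(listing: dict) -> float:
    """Machine-learning-style scoring: combines multiple weaker signals into one score.
    In practice the weights would be learned from labeled past examples; the
    values below are made up for illustration."""
    weights = {"account_age_days": -0.03, "report_count": 1.2, "link_count": 0.4}
    bias = -2.0
    z = bias + sum(weights[name] * listing.get(name, 0) for name in weights)
    return 1 / (1 + math.exp(-z))  # probability-like score between 0 and 1


listing = {
    "account_age_days": 0,
    "ticket_price": 750,
    "report_count": 2,
    "link_count": 5,
    "description": "Exclusive event with guaranteed returns!",
}

# Either detection path can route a listing to human review.
if rules_based_flag(listing) or ml_risk_score(listing) > 0.8:
    print("Route listing to human review")
```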

Reactive Detection

In addition to the proactive methods described above, our community plays an important role in reporting content they believe is harmful, either by Contacting Us or by using the Report This Event link in each event’s footer. Although we aspire to proactively remove harmful content before anyone sees it, this open line of communication with our community is a crucial part of our content moderation program.

How We Review Content

Although automation is critical to our ability to scale our content moderation program, there are still areas where human review is, and may always be, required. For example, discerning whether an individual is the target of bullying can be nuanced and context-dependent.

When our team reviews content, here’s what happens (a simplified sketch of this flow follows the list):

  • Review: Our team reviews the content and decides whether it violates our policies or is illegal. Depending on the details of the case, our team may review only the reported content or take a broader look at the account.

  • Get Information (If Necessary): If the team is unable to determine whether the content violates our policies, it may seek additional information from relevant parties (e.g., the account holder or the individual who reported the content).

  • Action: If the content is determined to violate our policies, the team will take appropriate action, including delisting content from search results, removing content from event listings, unpublishing event listings from the marketplace, and/or suspending or terminating account access.

  • Communicate: Once the case is resolved, the team informs the relevant parties of any actions taken and why. Impacted users can reply to this communication to appeal the decision.
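As a purely conceptual illustration, the sketch below walks through the same review, information-gathering, action, and communication steps in code. The decision values, actions, and helper names are hypothetical and do not reflect Eventbrite’s internal tooling.

```python
"""Conceptual sketch of the review workflow described above.
Decision values, actions, and field names are hypothetical illustrations."""

from enum import Enum


class Decision(Enum):
    NO_VIOLATION = "no_violation"
    VIOLATION = "violation"
    NEEDS_MORE_INFO = "needs_more_info"


def review(report: dict) -> Decision:
    """Review step: decide whether the reported content violates policy."""
    if report.get("context_missing") and "extra_context" not in report:
        return Decision.NEEDS_MORE_INFO
    return Decision.VIOLATION if report.get("violates_policy") else Decision.NO_VIOLATION


def moderate(report: dict) -> str:
    decision = review(report)

    # Get information (if necessary), then review again with the added context.
    if decision is Decision.NEEDS_MORE_INFO:
        report["extra_context"] = "reply from account holder or reporter"
        decision = review(report)

    # Action: choose a remedy only when a violation is confirmed.
    action = "unpublish listing" if decision is Decision.VIOLATION else "no action"

    # Communicate: inform the relevant parties of the outcome and why.
    print(f"Decision: {decision.value}; action taken: {action}")
    return action


moderate({"context_missing": True, "violates_policy": True})
```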

Our priority is to remove harmful content quickly and with the least disruption to our community. By combining proactive and reactive detection with human review of escalated content where warranted, we can detect harmful content promptly and take appropriate action.

As the world around us evolves, so must the ways in which we moderate content. We are constantly updating our program in anticipation of, or in response to, new behaviors, trends, and perceived threats to the integrity of the Eventbrite marketplace.
