Evidence Rating: Effective | More than one study
Date:
This intervention displayed online warning messages to Internet users attempting to view "barely legal" pornography or upload sexual photos of women, to reduce access to child sexual exploitation material online. The program is rated Effective. Individuals in the experimental groups who received the online warning messages were statistically significantly less likely to attempt to enter the websites, compared with individuals who did not receive a warning message.
An Effective rating implies that implementing the program is likely to result in the intended outcome(s).
This program's rating is based on evidence that includes at least one high-quality randomized controlled trial.
Program Goals
Child sexual exploitation material is broadly defined as any material involving sexualized content of children (Prichard and Spiranovic 2014). The Internet can provide an environment where this material can be accessed immediately and cheaply, with apparent anonymity and low risk of detection, from an individual’s home (Merdian, Wilson, and Boer 2009; Quayle 2012; Wortley and Smallbone 2006; Wortley and Smallbone 2012).
While many jurisdictions have specialized law enforcement agencies to address individuals who engage with this kind of material online, prevention strategies may provide an alternative response. Online warning messages have been recommended as one strategy to reduce the viewing of child sexual exploitation material (Quayle and Koukopoulos 2019; Williams 2005; Wortley and Smallbone 2012), especially by individuals who are just beginning to seek out this material (Taylor and Quayle 2008). The goal of online warning messages is to deter Internet users from accessing child sexual exploitation material.
Program Components
Child sexual exploitation material online warning messages have been recommended as a prevention strategy because they are consistent with health prevention models (Quayle and Koukopoulos 2019) and can be implemented by any agency with the capacity to inject code into web pages to trigger actions defined in the page’s JavaScript (Prichard et al. 2019).
Online warning messages display clear and concise information about the hazards of entering a site or accessing certain material and the behavior needed to avoid them (Laughery and Page–Smith 2006; Lenorovitz, Leonard, and Karnes 2012). Messages should be believable (Riley 2006), come from a credible source (Wogalter and Mayhorn 2008), and attract and maintain the attention of viewers through the use of signal words such as “warning” (Wogalter, Jarrard, and Simpson 1992), alert icons or symbols like “!” (Ng and Chan 2009), and colors (Silic and Cyr 2016). Many approaches can be used to deliver these warning messages to users. For instance, messages can be activated when certain search terms are entered, or when users attempt to access a URL known to contain child sexual exploitation material.
In this intervention, which took place in Australia, a honeypot website called “GetFit” was built to display these types of online warning messages regarding material closely related to child sexual exploitation. Honeypots are Internet sites that mimic likely targets for online attacks or other deviant contact and are thus used as “bait” to detect, analyze, and/or counter such unwanted activity. GetFit was a fully functional bodybuilding website that contained articles on muscle development, sport, diet, and some articles on sex. It also contained fake advertisements for websites purporting to offer “barely legal” pornography, or free pornography for users who uploaded their own sexual images of women. When users clicked on these advertisements, a warning message was triggered that began with the language “We thought you’d like to know this website shows females who are just above legal age, but may look younger,” followed by the specific deterrent language “Viewing this material may be illegal in some countries and lead to arrest.” Another online warning message on the fake advertisement read: “It’s a crime to share sexual images of people who look under 18. Visit esafety.gov.au to find out more”—with text indicating the message was endorsed by the Australian Office of the e-Safety Commissioner. The visual design of the messages incorporated alert symbols (“!”) and a signal word (“warning”). The messages were interstitial banners, meaning they covered users’ entire screen regardless of device type. Users had to dismiss the message by clicking “exit,” “enter,” or using a navigation function (e.g., closing their browser).
Program Theory
The use of online warning messages as a child sexual exploitation material prevention tool is based on the theory of situational crime prevention (Wortley 2012; Wortley and Smallbone 2006; Wortley and Smallbone 2012). The situational perspective posits that all crime is the result of a person–situation interaction, and that given the right circumstances, even normally law-abiding individuals may offend (Clarke 2017; Mayhew et al. 1975). As mentioned above, the Internet can provide an easy and anonymous way for individuals to access child sexual exploitation material. Therefore, warning messages are a strategy to make the Internet a less conducive environment for an individual to offend in (Wortley 2012; Wortley and Smallbone 2006; Wortley and Smallbone 2012) and clearly display the risks of accessing child sexual exploitation material by underscoring the chances and consequences of detection (Wortley and Smallbone 2012).
Study 1
Desistance Rates
Prichard and colleagues (2021) found that individuals who received the second deterrent warning message (“Viewing this material may be illegal in some countries and lead to arrest”) had a higher rate of desistance (i.e., not clicking “enter” on the Just Barely Legal site) than individuals in the control group who did not receive a deterrent warning message. Forty-eight percent of participants who received the second deterrent warning message clicked “enter” on the Just Barely Legal site, compared with 73 percent of the control group. This difference was statistically significant.
Study 2
Desistance Rates
Prichard and colleagues (2022) found that participants who received the static, text-only online warning message (Message A: “It’s a crime to share sexual images of people who look under 18. Visit esafety.gov.au to find out more.”) had a higher rate of desistance (i.e., not clicking “enter” on the Swap My Babe landing page) than individuals in the control group who did not receive a warning message. Forty-three percent of participants who received Message A clicked “enter” on the website landing page, compared with 60 percent of participants in the control group. This difference was statistically significant.
Study 2
Prichard and colleagues (2022) conducted a double-blind randomized controlled trial to examine the effectiveness of online warning messages on desistance from a website advertising free pornography for users who uploaded their own sexual images of a woman (referred to as “Swap My Babe”). Participants who visited the honeypot GetFit website (as described in the Program Description and Study 1, Prichard et al. 2021) between Aug. 26, 2019, and March 30, 2020, were included. Similar to Study 1, social media advertising was used to attract English-speaking Australian men ages 18–22 to the GetFit website, although Internet users also could come across the site through web browsing.
GetFit users who clicked on the advertisement for the fake Swap My Babe website were randomly allocated to control or experimental conditions. The control group did not receive an online message and proceeded directly to the fake landing page of the Swap My Babe website that provided users with the option of “exiting” (navigating to the previous GetFit page) or “entering,” which triggered a message after a 5-second delay from the website: “Sorry! We’re undergoing routine maintenance. Please check back shortly.” Experimental groups were presented with one of two messages, and the text presented in both messages stated: “It’s a crime to share sexual images of people who look under 18. Visit esafety.gov.au to find out more.” Message A contained only static text, and Message B combined the text with a 9-second animation depicting a male character uploading a sexual image online and then being arrested. Both messages indicated that the text was endorsed by the Office of the e-Safety Commissioner and contained an alert symbol (“!”) and a signal word (“warning”). To remove the message, the participant had to interact with it (“click ‘exit’”). The CrimeSolutions review of this study focused on the results of the Message A experimental group, compared with the control group. After randomization, there were 102 participants in the control group, and 177 in the Message A experimental group.
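The random allocation of visitors to control or experimental conditions can be sketched in code. The following is a minimal, hypothetical Python sketch, not the study's actual implementation: hashing each visitor's IP address yields a stable assignment, so repeat visits from the same address land in the same arm. The condition names and the uniform three-way split are assumptions for illustration; the study's actual allocation produced unequal group sizes.

```python
import hashlib

# Hypothetical condition labels; the study compared a control group
# with Message A (static text) and Message B (text plus animation).
CONDITIONS = ("control", "message_a", "message_b")

def allocate(ip: str) -> str:
    """Deterministically assign a visitor to a condition by hashing
    the IP address, so repeat visits get the same condition.
    A sketch of one plausible allocation scheme, not the study's code."""
    digest = hashlib.sha256(ip.encode()).digest()
    return CONDITIONS[int.from_bytes(digest[:4], "big") % len(CONDITIONS)]

print(allocate("192.0.2.1"))  # always the same condition for this IP
```

A deterministic, hash-based scheme is one way to keep assignment consistent across repeat visits without storing per-visitor state; per-request random draws would also work if repeat IPs are deduplicated afterward, as the study did.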
Participants’ attempts to enter the Swap My Babe website landing page were measured to create a dichotomous dependent variable, desistance, which referred to the proportion of participants who did not click “enter” at the Swap My Babe landing page. Google Analytics provided metrics about the number of visitors to GetFit, their pathway to the site, and their behavior at the site. Repetitions of IP addresses were deleted through manual checking and records identified as bots were also excluded. The IP addresses of the participants who clicked on the Swap My Babe advertisement were gathered. No other information was gathered about the participants.
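As a rough illustration of the outcome construction described above (IP deduplication, bot exclusion, and a dichotomous desistance measure), here is a hedged Python sketch; the function name, data shape, and sample records are hypothetical, not drawn from the study:

```python
def desistance_rate(events, bot_ips):
    """events: iterable of (ip, clicked_enter) tuples, one per visit.
    Keeps only the first visit per IP, excludes bot traffic, and
    returns the proportion of participants who did NOT click 'enter'.
    Hypothetical sketch, not the study's actual code."""
    seen = set()
    desisted = entered = 0
    for ip, clicked_enter in events:
        if ip in bot_ips or ip in seen:
            continue  # drop bots and repeat IPs
        seen.add(ip)
        if clicked_enter:
            entered += 1
        else:
            desisted += 1
    total = desisted + entered
    return desisted / total if total else 0.0

# Illustrative data: three unique visitors, one repeat visit, one bot.
events = [
    ("203.0.113.1", True),
    ("203.0.113.1", True),   # repeat IP, ignored
    ("203.0.113.2", False),  # desisted
    ("203.0.113.9", True),   # bot, excluded
    ("203.0.113.3", False),  # desisted
]
print(desistance_rate(events, bot_ips={"203.0.113.9"}))  # → 2/3 ≈ 0.667
```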
Fisher’s exact test (one-sided) was applied to determine the statistical significance of differences in the observed proportion of users from both the Message A experimental group and the control group who did not click “enter” (desistance) on the Swap My Babe landing page. No subgroup analysis was conducted.
Study 1
Prichard and colleagues (2021) conducted a randomized controlled trial to examine the effect of online warning messages on individuals’ desistance rates at a “barely legal” honeypot website. “Barely legal” is a form of pornography that research has suggested is a pathway to child sexual exploitation material, and it was therefore used as a proxy for child sexual exploitation material in this study for legal and ethical reasons. The GetFit website (as described in the Program Description) was launched on April 6, 2017; the experimental period was Nov. 27, 2017, through April 2, 2019. Social media advertising for GetFit was used to attract English-speaking Australian males ages 18–30 to the website. Users were not aware of their involvement in the study and did not provide consent. No identifying information was gathered about the participants other than IP addresses.
GetFit contained fake advertisements for a pornography site, “Just Barely Legal.” The advertisements contained legally purchased nonpornographic images of models certified to be adults, accompanied by the text “Just Barely Legal” and an “enter” button. Participants who chose to click on the Just Barely Legal advertisement were randomly allocated to a control group or one of four experimental groups who received a message (one of two harm messages [H1 and H2], or one of two deterrent messages [D1 and D2]; the four messages are described below). The control group was taken straight to the Just Barely Legal landing page. Experimental groups were presented with a message relating to the risks of harm (to themselves or the “barely legal” actresses) and police activity (surveillance or arrest). To comply with ethical requirements, all messages began with the same language: “We thought you’d like to know this website shows females who are just above legal age, but may look younger.” Participants in the first harm message experimental group received the warning “Health professionals believe this material may lead users to become sexually aroused by children” (H1), and participants in the second harm message experimental group received the warning “Health professionals believe the individuals shown may experience long-term feelings of distress” (H2). Participants in the first deterrent message experimental group received the warning “Police may obtain IP addresses to track users” (D1); participants in the second deterrent message experimental group received the warning “Viewing this material may be illegal in some countries and lead to arrest” (D2). The CrimeSolutions review of this study focused on results for the D2 message experimental group, compared with the control group. After randomization, 99 participants received the D2 desistance warning message, and 100 participants did not receive any message (the control group).
The visual design of each of the warning messages incorporated an alert symbol (“!”) and a signal word (“warning”). Users had to dismiss the message by clicking “exit,” “enter,” or using a navigation function (e.g., closing their browser). “Enter” would direct users to the Just Barely Legal landing page. The fake entrance page of Just Barely Legal mimicked the layout and functionality of other pornography sites, providing users with the option of “exiting” (navigating to the previous GetFit page) or “entering,” which triggered a message after a 5-second delay from the Just Barely Legal site: “Sorry! We’re undergoing routine maintenance. Please check back shortly.” Control participants were directed to the Just Barely Legal site without a message and had the same “enter/exit” option.
Google Analytics were used to provide metrics about the number of visitors to GetFit, their pathway to the site, and their behavior at the site. Manual checking was used to delete repetitions of IP addresses, and any records identified as bots were excluded. Whether the participants attempted to “enter” the Just Barely Legal website once they arrived at the landing page was measured and used to create a dichotomous dependent variable, desistance. Four analyses were conducted, separately comparing each of the four experimental groups with the controls. Fisher’s exact test (one-sided) was applied to determine the statistical significance of the observed proportion of users from each group who did not click “enter” (desistance) on the Just Barely Legal landing page. No subgroup analysis was conducted.
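A one-sided Fisher's exact test on a 2×2 desistance table can be computed directly from the hypergeometric distribution. Below is a minimal Python sketch using only the standard library; the cell counts are reconstructed approximately from the reported percentages (73 of 100 control participants and roughly 48 of 99 D2 participants clicked "enter") and are an assumption for illustration, not figures taken from the study's data.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]].
    Returns P(X >= a), where X follows the hypergeometric distribution
    obtained by fixing the table margins."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(
        comb(row1, x) * comb(n - row1, col1 - x)
        for x in range(a, min(row1, col1) + 1)
    ) / denom

# Approximate reconstruction: rows = control / D2 group,
# columns = clicked "enter" / did not (desisted).
p = fisher_exact_one_sided(73, 27, 48, 51)
print(f"p = {p:.6f}")  # a small one-sided p-value (well below 0.05)
```

The same function applies to the Study 2 comparison by substituting that trial's control and Message A counts.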
In addition to law enforcement agencies and big-tech companies, online warning messages may be implemented by a range of agencies at the local level, such as schools, public libraries, fast-food outlets, and airports (Hunn et al. forthcoming). Honeypots have been used by researchers for many years, but mainly to study cyber-security. The deployment of a honeypot to examine strategies to prevent child sexual exploitation material is new (Scanlan et al. 2022), but indicates that honeypots might be implemented to study other online phenomena.
These sources were used in the development of the program profile:
Studies
Prichard, Jeremy, Joel Scanlan, Tony Krone, Caroline Spiranovic, Paul A. Watters, and Richard Wortley. 2022. Warning Messages to Prevent Illegal Sharing of Sexual Images: Results of a Randomized Controlled Experiment. Trends & Issues in Crime and Criminal Justice 647:1–14. Canberra, Australia: Australian Institute of Criminology.
Prichard, Jeremy, Richard Wortley, Paul A. Watters, Caroline Spiranovic, Charlotte Hunn, and Tony Krone. 2021. “Effects of Automated Messages on Internet Users Attempting to Access ‘Barely Legal’ Pornography.” Sexual Abuse 00(0):1–19.
These additional sources were used in the development of the program profile:
Clarke, Ronald V.G. 2017. Situational Crime Prevention. In Richard Wortley and Michael Townsley (eds.). Environmental Criminology and Crime Analysis (second ed.). Milton Park, England: Routledge, 286–303.
Hunn, Charlotte, Paul A. Watters, Jeremy Prichard, Richard Wortley, Joel Scanlan, Caroline Spiranovic, and Tony Krone. Forthcoming. Implementing Online Warnings to Prevent CSAM Use: A Technical Overview. Trends & Issues in Crime and Criminal Justice. Canberra, Australia: Australian Institute of Criminology.
Laughery, Kenneth R., and K. Page–Smith. 2006. Explicit Information in Warnings. In Michael S. Wogalter (ed.), Handbook of Warnings. Lawrence Erlbaum Associates, 419–28.
Lenorovitz, David R., S. David Leonard, and Edward W. Karnes. 2012. “Ratings Checklist for Warnings: A Prototype Tool to Aid Experts in the Adequacy Evaluation of Proposed or Existing Warnings.” Work 41(Suppl. 1):3616–23.
Mayhew, Patricia, Ronald V.G. Clarke, A. Sturman, and Mike Hough. 1975. Crime as Opportunity. London, England: Home Office Research and Planning Unit.
Merdian, Hannah Lena, Nick Wilson, and Douglas P. Boer. 2009. “Characteristics of Internet Sexual Offenders: A Review.” Sexual Abuse in Australia and New Zealand 2(1):34–45.
Ng, Annie W.Y., and Alan H.S. Chan. 2009. “What Makes an Icon Effective?” American Institute of Physics (AIP) Conference Proceedings, Hong Kong, China, Jan. 30.
Prichard, Jeremy, Tony Krone, Caroline Spiranovic, and Paul A. Watters. 2019. Transdisciplinary Research in Virtual Space: Can Online Warning Messages Reduce Engagement With Child Exploitation Material? In Richard Wortley, A. Sidebottom, N. Tilley, and G. Laycock (eds.). Routledge Handbook of Crime Science. Milton Park, England: Routledge, 309–19.
Prichard, Jeremy, and Caroline Spiranovic. 2014. Child Exploitation Material in the Context of Institutional Child Sexual Abuse. Sydney, Australia: Royal Commission Into Institutional Responses to Child Sexual Abuse.
Quayle, Ethel. 2012. “Organizational Issues and New Technologies.” In Marcus Erooga (ed.). Creating Safer Organizations: Practical Steps to Prevent the Abuse of Children by Those Working With Them. Hoboken, N.J.: Wiley–Blackwell, 99–121.
Quayle, Ethel, and Nikolaos Koukopoulos. 2019. “Deterrence of Online Child Sexual Abuse and Exploitation.” Policing 13(3):345–62.
Riley, D.M. 2006. Beliefs, Attitudes, and Motivation. In Michael S. Wogalter (ed.). Handbook of Warnings. Mahwah, N.J.: Lawrence Erlbaum Associates, 289–300.
Scanlan, Joel, Paul A. Watters, Jeremy Prichard, Charlotte Hunn, Caroline Spiranovic, and Richard Wortley. 2022. “Creating Honeypots to Prevent Online Child Exploitation.” Future Internet 14(4):1–14.
Silic, Mario, and Diane Cyr. 2016. “Colour Arousal Effect on Users’ Decision-Making Processes in the Warning Message Context.” Toronto, Ontario: Third International Conference on HCI in Business, Government, and Organizations.
Taylor, Max, and Ethel Quayle. 2008. “Criminogenic Qualities of the Internet in the Collection and Distribution of Abuse Images of Children.” Irish Journal of Psychology 29(1–2):119–30.
Williams, Katherine S. 2005. “Facilitating Safer Choices: Use of Warnings to Dissuade Viewing of Pornography on the Internet.” Child Abuse Review 14(6):415–29.
Wogalter, Michael S., Stephen W. Jarrard, and S. Noel Simpson. 1992. “Effects of Warning Signal Words on Consumer-Product Hazard Perceptions.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 36(13):935–39.
Wogalter, Michael S., and Christopher B. Mayhorn. 2008. “Trusting the Internet: Cues Affecting Perceived Credibility.” International Journal of Technology and Human Interaction 4(1):75–93.
Wortley, Richard. 2012. Situational Prevention of Child Abuse in the New Technologies. In Kurt M. Ribisl and Ethel Quayle (eds.). Preventing Online Exploitation of Children. Milton Park, England: Routledge, 1–28.
Wortley, Richard, and Stephen Smallbone. 2006. Child Pornography on the Internet: Problem-Oriented Guides for Police Series. Washington, D.C.: U.S. Department of Justice. https://popcenter.asu.edu/content/child-pornography-internet-0
Wortley, Richard, and Stephen Smallbone. 2012. Internet Child Pornography: Causes, Investigation, and Prevention. Westport, Conn.: ABC–CLIO.
Age: 18–30
Gender: Male
Setting (Delivery): Other Community Setting
Program Type: Commercial Sexual Exploitation/Human Trafficking Prevention/Intervention, Crime Prevention Through Environmental Design/Design Against Crime, Situational Crime Prevention, Specific deterrence
Current Program Status: Not Active