Guidance for Research on College Student Support Interventions
Lessons Learned from a Study of Single Stop
Research Summary
Published Jun 30, 2025
This brief provides guidance to researchers on opportunities to strengthen implementation and student take-up in rigorous studies of college student support interventions. Researchers describe four lessons learned from a RAND study of a college support program. Some of the challenges researchers faced and lessons learned were unique to studies of campus-based college support programs. Other challenges and lessons learned may be more broadly relevant for studies in K–12 settings and for other types of educational interventions (e.g., curriculum changes).
RAND researchers conducted an experimental study of Single Stop, a technology-based intervention that leverages screening for public benefits and case management to support low-income college students. Some of the study colleges struggled to implement the program with fidelity, and take-up of the public benefits screener and case management services was low. The four lessons learned from the Single Stop study may be relevant for researchers to consider as they conduct studies on the impact of student support programs in college settings.
Many efficacy studies, including this RAND study, focus on new sites to ensure that campuses are willing to stagger the rollout of the intervention, randomly or otherwise. Because colleges take time to build staff capacity to implement programs well and to establish broad student awareness, efficacy studies that examine new programs may find lower levels of fidelity and take-up. Thus, such studies might underestimate the impacts of programs relative to what research might find for more-established programs.
Researchers can leverage study approaches that assess efficacy in more-established sites, but these approaches also have limitations. For example, prior research on Single Stop used a propensity score–matching approach that compared students who did use the program with students at the same college who did not use the program. This approach allows researchers to study more-established sites, but it cannot account for unobservable differences in which students choose to use a well-established service. These unobservable differences can lead to biased estimates of the impacts of an intervention.
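To make the matching logic concrete, the minimal Python sketch below simulates the approach described above. All column names (such as used_program and persisted) and covariates are hypothetical illustrations, not drawn from the prior Single Stop research:

```python
# Minimal sketch of propensity score matching for an established-program
# comparison. Column names and simulated data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "gpa": rng.normal(2.8, 0.5, n),
    "pell": rng.integers(0, 2, n),
    "credits": rng.integers(3, 16, n),
})
# Simulated take-up and outcome; in practice these come from records.
p_use = 1 / (1 + np.exp(-(-2 + 0.8 * df["pell"] + 0.05 * df["credits"])))
df["used_program"] = rng.random(n) < p_use
df["persisted"] = (rng.random(n) < 0.6 + 0.05 * df["used_program"]).astype(int)

covariates = ["gpa", "pell", "credits"]
# 1. Estimate each student's propensity to use the program.
model = LogisticRegression().fit(df[covariates], df["used_program"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each user to the nonuser with the closest propensity score.
users = df[df["used_program"]]
nonusers = df[~df["used_program"]]
nn = NearestNeighbors(n_neighbors=1).fit(nonusers[["pscore"]])
_, idx = nn.kneighbors(users[["pscore"]])
matched = nonusers.iloc[idx.ravel()]

# 3. Compare outcomes. Matching balances only observed covariates,
#    so unobserved selection can still bias this estimate.
att = users["persisted"].mean() - matched["persisted"].mean()
print(f"Estimated effect on persistence (ATT): {att:.3f}")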
Randomized encouragement studies are another approach that has been used to study student supports in colleges. These study designs randomly promote student supports through email and text nudges on top of the broader outreach that colleges engage in to promote support services. Randomized encouragement designs may be most successful when few individuals are aware of the intervention, because the additional informational nudges can then drive more students to use services. Importantly, these designs capture the efficacy that the program would have if the nudges were a permanent implementation feature.
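The analysis behind an encouragement design can be sketched with a simple Wald (instrumental variables) calculation, in which the random nudge serves as an instrument for take-up. The variable names and simulated numbers below are purely illustrative:

```python
# Minimal sketch of the Wald/IV logic behind a randomized encouragement
# design. All names and numbers are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
nudged = rng.random(n) < 0.5                 # random email/text nudge
base_takeup = rng.random(n) < 0.05           # low baseline awareness
took_up = base_takeup | (nudged & (rng.random(n) < 0.10))
outcome = (rng.random(n) < 0.55 + 0.08 * took_up).astype(float)

# Intent-to-treat effect of the nudge on the outcome.
itt = outcome[nudged].mean() - outcome[~nudged].mean()
# First stage: how much the nudge moves take-up. If this is small,
# the LATE estimate below is unstable (a "weak instrument").
first_stage = took_up[nudged].mean() - took_up[~nudged].mean()
# Wald estimator: effect of take-up for students moved by the nudge.
late = itt / first_stage

print(f"ITT: {itt:.3f}, first stage: {first_stage:.3f}, LATE: {late:.3f}")
```

The first-stage term makes the design's key requirement concrete: if the nudges barely move take-up, the denominator is near zero and the estimated effect becomes unreliable.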
Researchers can leverage feasibility studies to prescreen sites and assess whether sites have sufficient guidance and resources to implement the intervention successfully. More-tailored site selection to test efficacy under ideal conditions may improve implementation and support higher rates of take-up. However, prescreening sites can also limit the generalizability of results to colleges that have different contexts.
If studies must focus on launching new programs, researchers should confirm that the developer's guidance and training are sufficient to support the immediate, successful launch of the program at new sites. Single Stop provided robust training and support for the use of its portal, but researchers identified gaps in the guidance provided to stand up a one-stop basic needs center and in the staff capacity necessary to carry out this work. This is not unique to Single Stop; guidance from educational technology providers often focuses on the basics of tool use and offers insufficient support for the human practices and capacity needed to ensure the success of the intervention.
Research suggests that building awareness of college basic needs supports can require a variety of broad, ongoing outreach efforts. Single Stop's national office promotes broad outreach and relationship-building as critical to driving take-up of the public benefits screener and case management services. Outreach through emails and text messages from case managers can be one valuable approach, but colleges frequently use other strategies, such as referrals from staff, faculty, and peers; information on course syllabi; fliers; and tables in common areas (e.g., food pantries). Because college students may require trust and relationship-building before they access basic needs supports that carry stigma, electronic communication steering students toward unfamiliar programs and new program staff may be less effective. Study constraints on broader and more-personalized approaches to outreach may therefore be particularly problematic for these interventions. Yet experimental designs, including this study, typically use controlled approaches to outreach that ensure that only a randomized subset of students hears about a program. Student- or course-level randomization may limit the opportunities for staff to leverage collegewide marketing approaches, including many of those listed here.
Researchers should consider whether their study design shifts the way an intervention is delivered and the implications for student take-up and generalizability. Many student support interventions are delivered as campus-level interventions, in which deeply embedded services and broad outreach build awareness and take-up. Randomization at the campus level may be most appropriate for a campus-level intervention like Single Stop. But campus-level randomization is often impractical, and it is commonplace to study campus support services using within-campus randomization strategies. Promoting and providing interventions to students at the classroom level or student level are ways to support within-campus randomization. However, the estimates of impact from these studies will measure the impact of classroom-level or individual-level promotion and delivery of supports and may not be generalizable to the more-common, institutionwide approaches to delivery. Altering how services are delivered may also shift which students interact with the intervention and how they engage with the intervention. For example, required in-class Single Stop screenings may increase the overall number of students completing the screener, but providing institutionwide outreach on Single Stop and allowing students to self-identify may ensure that those receiving services are more likely to follow through and engage with case managers.
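The distinction between randomization levels can be made concrete with a short simulation. The roster below is hypothetical; in an actual study, assignments would come from college enrollment records:

```python
# Illustrative sketch of within-campus randomization at the student level
# versus the classroom level. Data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
roster = pd.DataFrame({
    "student_id": range(300),
    "classroom": rng.integers(0, 15, 300),
})

# Student-level randomization: each student is assigned independently.
roster["treat_student_level"] = rng.random(len(roster)) < 0.5

# Classroom-level randomization: whole classrooms are assigned together,
# so outreach and delivery can happen in class for treated sections.
classrooms = roster["classroom"].unique()
treated_rooms = set(rng.choice(classrooms, size=len(classrooms) // 2,
                               replace=False))
roster["treat_class_level"] = roster["classroom"].isin(treated_rooms)

print(roster[["treat_student_level", "treat_class_level"]].mean())
```

Classroom-level assignment keeps whole sections together, which permits in-class outreach and delivery for treated classrooms, whereas student-level assignment requires individually targeted outreach.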
Researchers can consider alternative study designs that allow for broader approaches to outreach and opportunities for relationship-building. For example, randomizing and delivering Single Stop at the classroom level would have allowed for in-class screenings and relationship-building with the case manager, and this in-class outreach and delivery may have increased the salience and take-up of the intervention. Researchers have used propensity score–matching approaches to study college support programs, and this research approach does not place any constraints on outreach. However, when most students are aware of a support service, there may be important unobserved differences in which students use services that can bias the estimates of the programs' impacts. Randomized encouragement designs have also allowed researchers to study college support programs; these studies randomly send email and text message nudges that encourage students to use services but do not restrict colleges from using established, campuswide outreach approaches. In randomized encouragement designs, the nudges must generate a large increase in take-up among the students who receive them. As a result, randomized encouragement studies may work best when baseline take-up of services is low and the informational nudges are likely to drive student action. Research indicates that it takes time to build awareness of and trust in high-stigma services, such as basic needs supports, so email and text message outreach may be more effective as a reminder to use services that students already know than as the initial and primary form of outreach about basic needs supports.
Some college student support services may be best delivered broadly to all students, while others may be better focused on a narrower subset of students most likely to benefit from the services. Only a subset of college students face basic needs insecurity; 23 percent face food insecurity and 8 percent are homeless, according to data from the National Postsecondary Student Aid Study. Those facing the highest levels of basic needs insecurity may be most likely to need the time-intensive screening and case management support that Single Stop offers. The standard practice for Single Stop sites is to promote the services broadly, and students self-select into using the programs. Many Single Stop sites serve a relatively small portion of the overall student population (10 percent or fewer). In contrast, the Single Stop study used a broad set of financial need indicators to determine eligibility, and 86 percent of the college students who responded to the survey entered the study.
Researchers should consider focusing studies on the subset of students most likely to need the services. Researchers can leverage administrative data and survey data to identify the students who might need and benefit most from interventions. But there are trade-offs in determining the right study population for college support services that are generally offered to all students. Using broad selection criteria for study inclusion can ensure that studies (and colleges) are not missing students who might benefit from the intervention. However, narrowing studies of interventions to students facing the highest rates of need for services will likely contribute to higher rates of take-up. Limiting the study population may be particularly important for interventions that are likely to benefit a smaller subset of students and may help frame intervention effects within the most relevant context.
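As a hypothetical illustration of this trade-off, the sketch below contrasts broad and narrow eligibility criteria built from administrative and survey flags; the flag names (pell_eligible, efc_zero, reported_food_insecurity) are invented for this example:

```python
# Illustrative sketch of broad versus narrow study eligibility criteria.
# All flags and records are hypothetical.
import pandas as pd

students = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "pell_eligible": [True, True, False, False],
    "efc_zero": [True, False, False, False],
    "reported_food_insecurity": [True, True, True, False],
})

# Broad criterion: any indicator of financial need
# (risks diluting take-up among students with little need).
flags = ["pell_eligible", "efc_zero", "reported_food_insecurity"]
broad = students[students[flags].any(axis=1)]

# Narrow criterion: multiple converging indicators
# (risks excluding students who would benefit).
narrow = students[
    students["pell_eligible"] & students["reported_food_insecurity"]
]

print(f"Broad sample: {len(broad)} students; narrow sample: {len(narrow)}")
```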
Research suggests that the accessibility of college support interventions for students and support staff alike is critical to implementation and take-up. When interventions impose substantial administrative burden on staff and students, staff may be less likely to implement interventions with fidelity and students may not want to take the time to engage with the supports.
Researchers and developers can conduct feasibility and pilot studies to identify and address issues with the accessibility of interventions to students and staff. Some efficacy studies are conducted on interventions for which relatively little is known about implementation and take-up. For example, most prior studies of Single Stop relied exclusively on administrative data; they found wide variation in take-up across colleges but did not explore how staff and students interacted with the tool. This study surfaced concerns among some college staff about the willingness of busy students to complete a 20-minute screener. Some college staff also reported that the lack of interoperability between the case management system and other student data tracking systems limited their desire to implement the program. Identifying these barriers to accessibility, and potentially modifying interventions to address them, may be an important step prior to large-scale efficacy studies.
Providing access to public benefits is not the most accessible way for colleges to connect students with resources that meet their needs. Helping individuals with public benefits applications is a long, multistep process, and students who apply for such programs as SNAP are commonly denied. Colleges across the United States (including those in the Single Stop study) have instead broadly scaled student supports, such as food pantries and emergency aid, that are relatively quick and easy to access. Supports that can immediately connect students with resources and impose the least administrative burden may be preferred by students, staff, and funders.
Researchers should limit study activities that add administrative burden. Recruitment and consent processes and primary data collection add value to research, but they also add administrative burden to accessing the supports. In this Single Stop study, the requirement to complete a baseline survey before accessing the service's portal may have limited students' willingness to engage with the tool. Conducting study activities and interventions during class time may be one way to reduce the burden on students, but in-class delivery of supports may add burden for faculty if class time is limited and may diverge from how college support services are typically provided.
This publication is part of the RAND research brief series. Research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.