Interdisciplinary Security, Privacy, and Interaction Research Lab

@ Duke University, Computer Science Department



Recent Publications

Exploring Security and Privacy Discourse on Twitter During the 'Justice Pour Nahel' Movement in France 
Hiba Laabadli, Yimeng Ma, Sofia Radkova, and Pardis Emami-Naeini.
CHI 2025
The shooting of Nahel Merzouk in June 2023 ignited widespread protests across France, known as the “Justice Pour Nahel” movement, drawing attention to the privacy and security risks faced by protesters. This study explores the discourse on Twitter during the protests, focusing on digital surveillance and censorship concerns. We analyzed 341 tweets using qualitative methods to understand the security and privacy attitudes and advice shared by French-speaking users. Our findings reveal a strong apprehension toward increased long-term government surveillance and censorship, with limited and often low-tech advice on how to counteract these threats. We highlight the discrepancy between the concerns raised and the available guidance and compare our findings with those of prior work. Grounded in our analysis and informed by prior research, we offer targeted recommendations for activists, policymakers, and researchers to mitigate security and privacy concerns arising from social unrest, both in France and globally.

As the number of smart devices increases in our lives, the data they collect to perform valuable tasks, such as voice assistant requests, comes at the cost of user privacy. To mitigate their privacy impact, emerging usable privacy-aware sensing (UPAS) research has relied on cross-disciplinary approaches that extend past the core focus of broader academic research communities, such as Security & Privacy or Human-Computer Interaction. These works incorporate privacy design principles, whereby systems include safeguards by combining usable privacy (UP) with privacy-aware sensing (PAS) design to protect users’ privacy. To better understand this emerging area of research, we conducted a mixed qualitative and quantitative Systematization of Knowledge (SoK). With a thorough review of pertinent literature, resulting in 114 selected works (reduced from 10,122 across 12 venues), we found that, despite the similarity of these works, many are dispersed across multiple communities, utilize community-specific jargon and keywords, and minimally overlap in design and evaluation approaches, potentially hindering cross-pollination across communities and thereby slowing the growth of this emerging research area. Together, these factors reveal a research gap in this space. We use these findings to present four research themes and provide community and design recommendations to encourage cross-disciplinary UPAS research.

All Publications

Exploring Security and Privacy Discourse on Twitter During the 'Justice Pour Nahel' Movement in France 
Hiba Laabadli, Yimeng Ma, Sofia Radkova, and Pardis Emami-Naeini.
CHI 2025
The shooting of Nahel Merzouk in June 2023 ignited widespread protests across France, known as the “Justice Pour Nahel” movement, drawing attention to the privacy and security risks faced by protesters. This study explores the discourse on Twitter during the protests, focusing on digital surveillance and censorship concerns. We analyzed 341 tweets using qualitative methods to understand the security and privacy attitudes and advice shared by French-speaking users. Our findings reveal a strong apprehension toward increased long-term government surveillance and censorship, with limited and often low-tech advice on how to counteract these threats. We highlight the discrepancy between the concerns raised and the available guidance and compare our findings with those of prior work. Grounded in our analysis and informed by prior research, we offer targeted recommendations for activists, policymakers, and researchers to mitigate security and privacy concerns arising from social unrest, both in France and globally.

As the number of smart devices increases in our lives, the data they collect to perform valuable tasks, such as voice assistant requests, comes at the cost of user privacy. To mitigate their privacy impact, emerging usable privacy-aware sensing (UPAS) research has relied on cross-disciplinary approaches that extend past the core focus of broader academic research communities, such as Security & Privacy or Human-Computer Interaction. These works incorporate privacy design principles, whereby systems include safeguards by combining usable privacy (UP) with privacy-aware sensing (PAS) design to protect users’ privacy. To better understand this emerging area of research, we conducted a mixed qualitative and quantitative Systematization of Knowledge (SoK). With a thorough review of pertinent literature, resulting in 114 selected works (reduced from 10,122 across 12 venues), we found that, despite the similarity of these works, many are dispersed across multiple communities, utilize community-specific jargon and keywords, and minimally overlap in design and evaluation approaches, potentially hindering cross-pollination across communities and thereby slowing the growth of this emerging research area. Together, these factors reveal a research gap in this space. We use these findings to present four research themes and provide community and design recommendations to encourage cross-disciplinary UPAS research.

Well-intended but half-hearted: Hosts’ consideration of guests’ privacy using smart devices on rental properties 
Sunyup Park, Weijia He, Elmira Deldari, Pardis Emami-Naeini, Danny Yuxing Huang, Jessica Vitak, Yaxing Yao, and Michael Zimmer.
SOUPS 2024
The increased use of smart home devices (SHDs) on short-term rental (STR) properties raises privacy concerns for guests. While previous literature identifies guests’ privacy concerns and the need to negotiate guests’ privacy preferences with hosts, there is a lack of research from the hosts’ perspectives. This paper investigates if and how hosts consider guests’ privacy when using their SHDs on their STRs, to understand hosts’ willingness to accommodate guests’ privacy concerns, a starting point for negotiation. We conducted online interviews with 15 STR hosts (e.g., Airbnb/Vrbo), finding that they generally use, manage, and disclose their SHDs in ways that protect guests’ privacy. However, hosts’ practices fell short of their intentions because of competing needs and goals (i.e., protecting their property versus protecting guests’ privacy). Findings also highlight that hosts do not have proper support from the platforms on how to navigate these competing goals. Therefore, we discuss how to improve platforms’ guidelines/policies to prevent and resolve conflicts with guests and measures to increase engagement from both sides to set the ground for negotiation.

Recent years have seen a sharp increase in the number of underage users in virtual reality (VR), where security and privacy (S&P) risks such as data surveillance and self-disclosure in social interaction have been increasingly prominent. Prior work shows children largely rely on parents to mitigate S&P risks in their technology use. Therefore, understanding parents’ S&P knowledge, perceptions, and practices is critical for identifying the gaps for parents, technology designers, and policymakers to enhance children’s S&P. While such empirical knowledge is substantial in other consumer technologies, it remains largely unknown in the context of VR. To address the gap, we conducted in-depth semi-structured interviews with 20 parents of children under the age of 18 who use VR at home. Our findings highlight parents generally lack S&P awareness due to the perception that VR is still in its infancy. To protect their children’s interactions with VR, parents currently primarily rely on active strategies such as verbal education about S&P. Passive strategies such as using parental controls in VR are not commonly used among our interviewees, mainly due to their perceived technical constraints. Parents also highlight that a multi-stakeholder ecosystem must be established towards more S&P support for children in VR. Based on the findings, we propose actionable S&P recommendations for critical stakeholders, including parents, educators, VR companies, and governments.

"I Deleted It After the Overturn of Roe v. Wade": Understanding Women's Privacy Concerns Toward Period-Tracking Apps in the Post Roe v. Wade Era 
Jiaxun Cao, Hiba Laabadli, Chase H. Mathis, Rebecca D. Stern, and Pardis Emami-Naeini.
CHI 2024, FTC PrivacyCon 2024
The overturn of Roe v. Wade has taken away the constitutional right to abortion. Prior work shows that period-tracking apps’ data practices can be used to detect pregnancy and abortion, hence putting women at risk of being prosecuted. It is unclear how much women know about the privacy practices of such apps and how concerned they are after the overturn. Such knowledge is critical to designing effective strategies for stakeholders to enhance women's reproductive privacy. We conducted an online 183-participant vignette survey with US women from states with diverse policies on abortion. Participants were significantly concerned about the privacy practices of the period-tracking apps, such as data access by law enforcement and third parties. However, participants felt uninformed and powerless about risk mitigation practices. We provide several recommendations to enhance women's privacy awareness toward their period-tracking practices.

Internet of Things Security and Privacy Labels Should Empower Consumers 
Lorrie Cranor, Yuvraj Agarwal, and Pardis Emami-Naeini.
Communications of the ACM 2024

Understanding People’s Concerns and Attitudes Toward Smart Cities 
Pardis Emami-Naeini, Joe Breda, Wei Dai, Tadayoshi Kohno, Kim Laine, Shwetak Patel, and Franziska Roesner.
CHI 2023
Designing privacy-respecting and human-centric smart cities requires a careful investigation of people’s attitudes and concerns toward city-wide data collection scenarios. To capture a holistic view, we carried out this investigation in two phases. We first surfaced people’s understanding, concerns, and expectations toward smart city scenarios by conducting 21 semi-structured interviews with people in underserved communities. We complemented this in-depth qualitative study with a 348-participant online survey of the general population to quantify the significance of smart city factors (e.g., type of collected data) on attitudes and concerns. Depending on demographics, privacy and ethics were the two most common types of concerns among participants. We found the type of collected data to have the most and the retention time to have the least impact on participants’ perceptions and concerns about smart cities. We highlight key takeaways and recommendations for city stakeholders to consider when designing inclusive and protective smart cities.

Exploring the Impact of Ethnicity on Susceptibility to Voice Phishing 
Aritra Ray, Sohini Saha, Krishnendu Chakrabarty, Leslie Collins, Kyle Lafata, and Pardis Emami-Naeini.
SOUPS 2023
Spear phishing is a common, targeted form of phishing in which the attacker uses information relevant to the target to increase the effectiveness of the attack. We explore the impact of people’s native language accents on their susceptibility to voice phishing, where the attacker asks for users’ financial information (e.g., credit card number). We designed a mixed-methods survey and recruited 140 Prolific participants. Using an AI voice generator, we created two types of English audio prompts (e.g., new Medicare card, parcel delivery) with four types of accents (e.g., Chinese, Hindi). Each participant was presented with two audio prompts, one with their native language accent and one with no accent (US-English). Our findings showed that, except for Hindi native speakers, participants perceived the no-accent (US-English) prompts as more trustworthy and were significantly more willing to share their sensitive financial information when the prompts were presented in a US-English accent.

Abuse Vectors: A Framework for Conceptualizing IoT-Enabled Interpersonal Abuse 
Sophie Stephenson, Majed Almansoori, Pardis Emami-Naeini, Danny Yuxing Huang, and Rahul Chatterjee.
USENIX Security 2023
Tech-enabled interpersonal abuse (IPA) is a pervasive problem. Abusers, often intimate partners, use tools such as spyware to surveil and harass victim-survivors. Unfortunately, anecdotal evidence suggests that smart, Internet-connected devices such as home thermostats, cameras, and Bluetooth item finders may similarly be used against victim-survivors of IPA. To tackle abuse involving smart devices, it is vital that we understand the ecosystem of smart devices that enable IPA. Thus, in this work, we conduct a large-scale qualitative analysis of the smart devices used in IPA. We systematically crawl Google Search results to uncover web pages discussing how abusers use smart devices to enact IPA. By analyzing these web pages, we identify 32 devices used for IPA and detail the varied strategies abusers use for spying and harassment via these devices. Then, we design a framework—abuse vectors— which conceptualizes IoT-enabled IPA as four overarching patterns: Covert Spying, Unauthorized Access, Repurposing, and Intended Use. Using this lens, we pinpoint the necessary solutions required to address each vector of IoT abuse and encourage the security community to take action.

Are Consumers Willing to Pay for Security and Privacy of IoT Devices? 
Pardis Emami-Naeini, Janarth Dheenadhayalan, Yuvraj Agarwal, and Lorrie Cranor.
USENIX Security 2023
Internet of Things (IoT) device manufacturers provide little information to consumers about their security and data handling practices. Therefore, IoT consumers cannot make informed purchase choices around security and privacy. While prior research has found that consumers would likely consider security and privacy when purchasing IoT devices, past work lacks empirical evidence as to whether they would actually pay more to purchase devices with enhanced security and privacy. To fill this gap, we conducted a two-phase incentive-compatible online study with 180 Prolific participants. We measured the impact of five security and privacy factors (e.g., access control) on participants' purchase behaviors when presented individually or together on an IoT label. Participants were willing to pay a significant premium for devices with better security and privacy practices. The biggest price differential we found was for de-identified rather than identifiable cloud storage. Mainly due to its usability challenges, the least valuable improvement for participants was to have multi-factor authentication as opposed to passwords. Based on our findings, we provide recommendations on creating more effective IoT security and privacy labeling programs.

Victim-survivors of intimate partner violence (IPV) are facing a new technological threat: Abusers are leveraging IoT devices such as smart thermostats, hidden cameras, and GPS trackers to spy on and harass victim-survivors. Though prior work provides a foundation of what IoT devices can be involved in intimate partner violence, we lack a detailed understanding of the factors which contribute to this IoT abuse, the strategies victim-survivors use to mitigate IoT abuse, and the barriers they face along the way. Without this information, it is challenging to design effective solutions to stop IoT abuse. To fill this gap, we interviewed 20 participants with firsthand or secondhand experience with IoT abuse. Our interviews captured 39 varied instances of IoT abuse, from surveillance with hidden GPS trackers to harassment with smart thermostats and light bulbs. They also surfaced 21 key barriers victim-survivors face while coping with IoT abuse. For instance, victim-survivors struggle to find proof of the IoT abuse they experience, which makes mitigations challenging. Even with proof, victim-survivors face barriers mitigating the abuse; for example, mitigation is all but impossible for victim-survivors living with an abusive partner. Our findings pinpoint several solutions to combat IoT abuse, including increased transparency of IoT devices, updated IoT access control protocols, and raising awareness of IoT abuse.

Skilled or Gullible? Gender Stereotypes Related to Computer Security and Privacy 
Miranda Wei, Pardis Emami-Naeini, Tadayoshi Kohno, and Franziska Roesner.
IEEE S&P 2023
Gender stereotypes remain common in U.S. society and harm people of all genders. Focusing on binary genders (women and men) as a first investigation, we empirically study gender stereotypes related to computer security and privacy. We used Prolific to conduct two surveys with U.S. participants that aimed to: (1) surface potential gender stereotypes related to security and privacy (N = 202), and (2) assess belief in gender stereotypes about security and privacy engagement, personal characteristics, and behaviors (N = 190). We find that stereotype beliefs are significantly correlated with participants’ gender as well as a level of sexism, and we delve into the justifications our participants offered for their beliefs. Beyond scientifically studying the existence and prevalence of such stereotypes, we describe potential implications, including biasing crowd worker-facilitated user research. Further, our work lays a foundation for deeper investigations of the impacts of stereotypes in computer security and privacy, as well as stereotypes across the whole gender and identity spectrum.

"Dump it, Destroy it, Send it to Data Heaven": Blind People's Expectations for Visual Privacy in Visual Assistance Technologies 
Abigale Stangl, Emma Sadjo, Pardis Emami-Naeini, Yang Wang, Danna Gurari, and Leah Findlater.
W4A 2023
Visual assistance technologies provide people who are blind with access to information about their visual surroundings by digitally connecting them to remote humans or artificial intelligence systems that describe visual content such as objects, people, scenes, and text observed in their live image/video feeds. Prior work has revealed that users have concerns about how such technologies handle private visual content captured in their image/video feeds. Yet, it remains unclear how users want technologies to manage such private content. To fill this gap, we interviewed 16 totally blind individuals to learn about their expectations for visual privacy when using visual assistance technologies. Our findings reveal three overarching user-centered expectations associated with visual privacy-preservation in this domain, as well as the broader ethical challenges involved with developing AI-based privacy-preserving visual assistance technologies.

Exploring Deceptive Design Patterns in Voice Interfaces 
Kentrell Owens, Johanna Gunawan, Dave Choffnes, Pardis Emami-Naeini, Tadayoshi Kohno, and Franziska Roesner.
EuroUSEC 2022
Deceptive design patterns (sometimes called “dark patterns”) are user interface design elements that may trick, deceive, or mislead users into behaviors that often benefit the party implementing the design over the end user. Prior work has taxonomized, investigated, and measured the prevalence of such patterns primarily in visual user interfaces (e.g., on websites). However, as the ubiquity of voice assistants and other voice-assisted technologies increases, we must anticipate how deceptive designs will be (and indeed, are already) deployed in voice interactions. This paper makes two contributions towards characterizing and surfacing deceptive design patterns in voice interfaces. First, we make a conceptual contribution, identifying key characteristics of voice interfaces that may enable deceptive design patterns, and surfacing existing and theoretical examples of such patterns. Second, we present the findings from a scenario-based user survey with 93 participants, in which we investigate participants’ perceptions of voice interfaces that we consider to be both deceptive and non-deceptive.

You, Me, and IoT: How Internet-connected Consumer Devices Affect Interpersonal Relationships 
Noah Apthorpe, Pardis Emami-Naeini, Arunesh Mathur, Marshini Chetty, and Nick Feamster.
ACM Transactions on Internet of Things 2022
Internet-connected consumer devices have rapidly increased in popularity; however, relatively little is known about how these technologies are affecting interpersonal relationships in multi-occupant households. In this study, we conduct 13 semi-structured interviews and survey 508 individuals from a variety of backgrounds to discover and categorize how consumer IoT devices are affecting interpersonal relationships in the United States. We highlight several themes, providing exploratory data about the pervasiveness of interpersonal costs and benefits of consumer IoT devices. These results inform follow-up studies and design priorities for future IoT technologies to amplify positive and reduce negative interpersonal effects.

Understanding Privacy Attitudes and Concerns Towards Remote Communications During the COVID-19 Pandemic 
Pardis Emami-Naeini, Tiona Francisco, Tadayoshi Kohno, and Franziska Roesner.
SOUPS 2021
Since December 2019, the COVID-19 pandemic has caused people around the world to exercise social distancing, which has led to an abrupt rise in the adoption of remote communications for working, socializing, and learning from home. As remote communications will outlast the pandemic, it is crucial to protect users’ security and respect their privacy in this unprecedented setting, and that requires a thorough understanding of their behaviors, attitudes, and concerns toward various aspects of remote communications. To this end, we conducted an online study with 220 worldwide Prolific participants. We found that privacy and security are among the most frequently mentioned factors impacting participants’ attitude and comfort level with conferencing tools and meeting locations. Open-ended responses revealed that most participants lacked autonomy when choosing conferencing tools or using microphone/webcam in their remote meetings, which in several cases contradicted their personal privacy and security preferences. Based on our findings, we distill several recommendations on how employers, educators, and tool developers can inform and empower users to make privacy-protective decisions when engaging in remote communications.

Which Privacy and Security Attributes Most Impact Consumers’ Risk Perception and Willingness to Purchase IoT Devices? 
Pardis Emami-Naeini, Tiona Francisco, Tadayoshi Kohno, and Franziska Roesner.
IEEE S&P 2021
In prior work, researchers proposed an Internet of Things (IoT) security and privacy label akin to a food nutrition label, based on input from experts. We conducted a survey with 1,371 Mechanical Turk (MTurk) participants to test the effectiveness of each of the privacy and security attribute-value pairs proposed in that prior work along two key dimensions: ability to convey risk to consumers and impact on their willingness to purchase an IoT device. We found that the values intended to communicate increased risk were generally perceived that way by participants. For example, we found that consumers perceived more risk when a label conveyed that data would be sold to third parties than when it would not be sold at all, and that consumers were more willing to purchase devices when they knew that their data would not be retained or shared with others. However, participants’ risk perception did not always align with their willingness to purchase, sometimes due to usability concerns. Based on our findings, we propose actionable recommendations on how to more effectively present privacy and security attributes on an IoT label to better communicate risk to consumers.

An Informative Security and Privacy “Nutrition” Label for Internet of Things Devices 
Pardis Emami-Naeini, Janarth Dheenadhayalan, Yuvraj Agarwal, and Lorrie Cranor.
IEEE S&P 2021
Consumers are concerned about the security and privacy of their Internet of Things (IoT) devices. However, they cannot easily learn about their devices’ security protections and data practices before purchasing them. We designed a usable and informative IoT security and privacy label.

Ask the Experts: What Should Be on an IoT Privacy and Security Label? 
Pardis Emami-Naeini, Yuvraj Agarwal, Lorrie Cranor, and Hanan Hibshi.
IEEE S&P 2020
Information about the privacy and security of Internet of Things (IoT) devices is not readily available to consumers who want to consider it before making purchase decisions. While legislators have proposed adding succinct, consumer-accessible labels, they do not provide guidance on the content of these labels. In this paper, we report on the results of a series of interviews and surveys with privacy and security experts, as well as consumers, where we explore and test the design space of the content to include on an IoT privacy and security label. We conduct an expert elicitation study by following a three-round Delphi process with 22 privacy and security experts to identify the factors that experts believed are important for consumers when comparing the privacy and security of IoT devices to inform their purchase decisions. Based on how critical experts believed each factor is in conveying risk to consumers, we distributed these factors across two layers—a primary layer to display on the product package itself or prominently on a website, and a secondary layer available online through a web link or a QR code. We report on the experts’ rationale and arguments used to support their choice of factors. Moreover, to study how consumers would perceive the privacy and security information specified by experts, we conducted a series of semi-structured interviews with 15 participants, who had purchased at least one IoT device (smart home device or wearable). Based on the results of our expert elicitation and consumer studies, we propose a prototype privacy and security label to help consumers make more informed IoT-related purchase decisions.

Exploring How Privacy and Security Factor into IoT Device Purchase Behavior 
Pardis Emami-Naeini, Yuvraj Agarwal, Lorrie Cranor, and Henry Dixon.
CHI 2019
Despite growing concerns about security and privacy of Internet of Things (IoT) devices, consumers generally do not have access to security and privacy information when purchasing these devices. We interviewed 24 participants about IoT devices they purchased. While most had not considered privacy and security prior to purchase, they reported becoming concerned later due to media reports, opinions shared by friends, or observing unexpected device behavior. Those who sought privacy and security information before purchase reported that it was difficult or impossible to find. We asked interviewees to rank factors they would consider when purchasing IoT devices; after features and price, privacy and security were ranked among the most important. Finally, we showed interviewees our prototype privacy and security label. Almost all found it to be accessible and useful, encouraging them to incorporate privacy and security in their IoT purchase decisions.

The Influence of Friends and Experts on Privacy Decision Making in IoT Scenarios 
Pardis Emami-Naeini, Martin Degeling, Richard Chow, Lujo Bauer, Lorrie Cranor, Mohammad Reza Haghighat, and Heather Patterson.
CSCW 2018
As increasingly many Internet-of-Things (IoT) devices collect personal data, users face more privacy decisions. Personal privacy assistants can provide social cues and help users make informed decisions by presenting information about how others have decided in similar cases. To better understand which social cues are relevant and whose recommendations users are more likely to follow, we presented 1,000 online participants with nine IoT data-collection scenarios. Some participants were told the percentage of experts or friends who allowed data collection in each scenario, while other participants were provided no social cue. At the conclusion of each scenario, participants were asked whether they would allow the described data collection. Our results help explain under what circumstances users are more or less likely to be swayed by the reported behavior of others in similar scenarios. For example, our results indicate that when friends denied data collection, our participants were more influenced than when friends allowed data collection. On the other hand, participants were more influenced by experts when they allowed data collection. We also observed that influence could get stronger or wear off when participants were exposed to a sequence of scenarios. For example, when experts and friends repeatedly allowed data collection in scenarios with clear risk or denied it in scenarios with clear benefits, participants were less likely to be influenced by them in subsequent scenarios.

User Behaviors and Attitudes Under Password Expiration Policies 
Pardis Emami-Naeini, Tiona Francisco, Tadayoshi Kohno, and Franziska Roesner.
SOUPS 2018
Policies that require employees to update their passwords regularly have become common at universities and government organizations. However, prior work has suggested that forced password expiration might have limited security benefits, or could even cause harm. For example, users might react to forced password expiration by picking easy-to-guess passwords or reusing passwords from other accounts. We conducted two surveys on Mechanical Turk through which we examined people's self-reported behaviors in using and updating workplace passwords, and their attitudes toward four previously studied password-management behaviors, including periodic password changes. Our findings suggest that forced password expiration might not have some of the negative effects that were feared, nor the positive ones that were hoped for. In particular, our results indicate that participants forced to change passwords did not resort to behaviors that would significantly decrease password security; on the other hand, their self-reported strategies for creating replacement passwords suggest that those passwords were no stronger than the ones they replaced. We also found that repeating security advice causes users to internalize it, even if evidence supporting the advice is scant. Our participants overwhelmingly reported that periodically changing passwords was important for account security, though not as important as other factors that have been more convincingly shown to influence password strength.

Let’s Go in For a Closer Look: Observing Passwords in Their Natural Habitat 
Sarah Pearman, Jeremy Thomas, Pardis Emami-Naeini, Hana Habib, Lujo Bauer, Nicolas Christin, Lorrie Cranor, Serge Egelman, and Alain Forget.
CCS 2017
Text passwords---a frequent vector for account compromise, yet still ubiquitous---have been studied for decades by researchers attempting to determine how to coerce users to create passwords that are hard for attackers to guess but still easy for users to type and memorize. Most studies examine one password or a small number of passwords per user, and studies often rely on passwords created solely for the purpose of the study or on passwords protecting low-value accounts. These limitations severely constrain our understanding of password security in practice, including the extent and nature of password reuse, password behaviors specific to categories of accounts (e.g., financial websites), and the effect of password managers and other privacy tools. In this paper we report on an in situ study of 154 participants over an average of 147 days each. Participants' computers were instrumented---with careful attention to privacy---to record detailed information about password characteristics and usage, as well as many other computing behaviors such as use of security and privacy web browser extensions. This data allows a more accurate analysis of password characteristics and behaviors across the full range of participants' web-based accounts. Examples of our findings are that the use of symbols and digits in passwords predicts increased likelihood of reuse, while increased password strength predicts decreased likelihood of reuse; that password reuse is more prevalent than previously believed, especially when partial reuse is taken into account; and that password managers may have no impact on password reuse or strength. We also observe that users can be grouped into a handful of behavioral clusters, representative of various password management strategies. Our findings suggest that once a user needs to manage a larger number of passwords, they cope by partially and exactly reusing passwords across most of their accounts.

Privacy Expectations and Preferences in an IoT World 
Pardis Emami-Naeini, Sruti Bhagavatula, Hana Habib, Martin Degeling, Lujo Bauer, Lorrie Cranor, and Norman Sadeh.
SOUPS 2017
With the rapid deployment of Internet of Things (IoT) technologies and the variety of ways in which IoT-connected sensors collect and use personal data, there is a need for transparency, control, and new tools to ensure that individual privacy requirements are met. To develop these tools, it is important to better understand how people feel about the privacy implications of IoT and the situations in which they prefer to be notified about data collection. We report on a 1,007-participant vignette study focusing on privacy expectations and preferences as they pertain to a set of 380 IoT data collection and use scenarios. Participants were presented with 14 scenarios that varied across eight categorical factors, including the type of data collected (e.g., location, biometrics, temperature), how the data is used (e.g., whether it is shared, and for what purpose), and other attributes such as the data retention period. Our findings show that privacy preferences are diverse and context-dependent; participants were more comfortable with data being collected in public settings rather than in private places and were more likely to consent to data being collected for uses they find beneficial. They were less comfortable with the collection of biometrics (e.g., fingerprints) than environmental data (e.g., room temperature, physical presence). We also find that participants are more likely to want to be notified about data practices that they are uncomfortable with. Finally, our study suggests that after observing an individual's decisions in just three data-collection scenarios, it is possible to predict their preferences for the remaining scenarios, with our model achieving an average accuracy of up to 86%.

Design and Evaluation of a Data-Driven Password Meter 
Blase Ur, Felicia Alfieri, Maung Aung, Lujo Bauer, Nicolas Christin, Jessica Colnago, Lorrie Cranor, Henry Dixon, Pardis Emami-Naeini, Hana Habib, Noah Johnson, and William Melicher.
CHI 2017
Despite their ubiquity, many password meters provide inaccurate strength estimates. Furthermore, they do not explain to users what is wrong with their password or how to improve it. We describe the development and evaluation of a data-driven password meter that provides accurate strength measurement and actionable, detailed feedback to users. This meter combines neural networks and numerous carefully combined heuristics to score passwords and generate data-driven text feedback about the user's password. We describe the meter's iterative development and final design. We detail the security and usability impact of the meter's design dimensions, examined through a 4,509-participant online study. Under the more common password-composition policy we tested, we found that the data-driven meter with detailed feedback led users to create more secure, and no less memorable, passwords than a meter with only a bar as a strength indicator.

Towards Privacy-Aware Smart Buildings: Capturing, Communicating, and Enforcing Privacy Policies and Preferences 
Primal Pappachan, Martin Degeling, Roberto Yus, Anupam Das, Sruti Bhagavatula, William Melicher, Pardis Emami-Naeini, Shikun Zhang, Lujo Bauer, Alfred Kobsa, Sharad Mehrotra, Norman Sadeh, and Nalini Venkatasubramanian.
ICDCSW 2017
The Internet of Things (IoT) is changing the way we interact with our environment in domains as diverse as health, transportation, office buildings and our homes. In smart building environments, information captured about the building and its inhabitants will aid in the development of services that improve productivity, comfort, social interactions, safety, energy savings and more. However, by collecting and sharing information about a building's inhabitants and their activities, these services also open the door to privacy risks. In this paper, we introduce a framework where IoT Assistants capture and manage the privacy preferences of their users and communicate them to privacy-aware smart buildings, which enforce them when collecting user data or sharing it with building services. We outline the elements necessary to support such interactions and also discuss important privacy policy attributes that need to be captured. This includes looking at attributes necessary to describe: (1) the data collection and sharing practices associated with deployed sensors and services in smart buildings, as well as (2) the privacy preferences that help users manage their privacy in such environments.