CYSE 201S 

Cybersecurity and the Social Sciences 

Article review 1

Introduction/BLUF

This article investigates how accurately people can identify StyleGAN2-generated deepfake images and whether two simple interventions (familiarization and advice) improve detection accuracy. People are slightly better than chance at detecting deepfakes, but the interventions tested did not improve performance, and participants were overconfident in their inaccurate judgments.

Relation/Connection to Social Science Principles

This study strongly incorporates multiple principles of social science:

1. Validity of Observation – The experiment examines how human perception processes visual stimuli and judges authenticity.
2. Empiricism – The study uses systematic data collection from over 1,000 participants to test human detection accuracy.
3. Parsimony – Only two simple interventions were tested to determine whether minimal guidance could help users.
4. Objectivity – Images, advice cues, and scoring metrics were standardized to remove researcher bias.
5. Theory Construction – The study contributes to theories of human decision-making, deception detection, and confidence miscalibration.
6. Ethical Neutrality – Deepfake technology is examined without moral judgment, focusing instead on its observable impact.
7. Cumulative Knowledge – Findings extend prior research on misinformation, AI manipulation, and human cognitive biases.
Research Question / Hypothesis / Independent Variable / Dependent Variable

The article addresses the following questions:

1. How accurately can people distinguish real images from StyleGAN2 deepfake images?
2. Do the Familiarization or Advice interventions improve participants' detection accuracy or confidence?

Hypotheses

The authors propose that:

- Participants given Familiarization or Advice will show improved accuracy compared to the control group.
- Participants may exhibit confidence in their judgments regardless of actual accuracy.

Independent Variables

- Presence or absence of the Familiarization intervention (viewing 20 deepfake examples beforehand).
- Presence or absence of the Advice intervention (receiving specific cues for spotting deepfakes).
- Image type (real vs. StyleGAN2-generated deepfake).

Dependent Variables

- Accuracy of identifying each image as real or deepfake.
- Confidence ratings associated with judgments.
- Use of advice cues reported by participants.

Types of Research Methods Used

This study used quantitative experimental research.
- Over 1,000 participants were randomly assigned to control, Familiarization, or Advice conditions.
- Participants viewed 100 images and made binary judgments (real or deepfake).
- They provided confidence ratings and self-reported the cues used to make decisions.
- Data were collected through a structured online decision task.

Types of Data Analysis Used

The authors used several quantitative analysis techniques:

- Accuracy scoring to determine overall performance.
- Signal Detection Theory (d′) to measure sensitivity independent of response bias.
- ANOVA tests to compare d′ across conditions and assess statistical significance.
- Descriptive statistics (mean accuracy, confidence levels, response times).
- Comparisons of per-image accuracy to see which stimuli were most or least difficult.

Connections to Other Course Concepts

This study directly connects to cybersecurity and social science concepts discussed in class:

- Human Vulnerability & Social Engineering: The research shows humans are overconfident and inaccurate, conditions that attackers exploit.
- Misinformation and Trust: Deepfakes challenge trust in digital media, a recurring topic in the course.
- Cognitive Biases: Overconfidence bias appears clearly; participants believed they were accurate even when they were not.
- Media Effects: Exposure to synthetic media influences perception and judgment.
- Technological Determinism: The power of StyleGAN2 demonstrates how technology shapes human behavior and societal risks.
- Risk Perception: Users assume they can detect fakes, but the study shows this perception is flawed.
- Cybercrime & Fraud: Deepfakes can enable identity theft, impersonation, and online scam activities, reinforcing course lessons.

Connections to the Concerns or Contributions of Marginalized Groups

The article notes that older adults may be especially vulnerable to deepfake deception due to reduced familiarity with technology, potential visual limitations, or increased targeting by fraudsters. While the study's sample was mostly young adults, it emphasizes the need for future research focused on older populations to assess their risk.

Overall Societal Contributions of the Study / Conclusion

This study contributes to society by demonstrating that humans are not reliable detectors of deepfakes and that simple interventions are ineffective. As deepfake technology becomes more advanced, allowing changes in lighting, background, facial expression, and even matching an existing person's identity, the potential for misuse increases. Combined with accessible audio deepfakes, attackers can construct fully synthetic identities at scale.
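A note on the study's central measure: Signal Detection Theory defines sensitivity as d′ = z(hit rate) − z(false-alarm rate), which is why the authors can separate true discrimination ability from a participant's bias toward answering "real" or "fake". A minimal Python sketch of that formula; the log-linear correction and the example counts are my own illustrative assumptions, not figures from the article:

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps z() finite when a raw rate would be 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# A participant performing exactly at chance has d' = 0.
print(d_prime(25, 25, 25, 25))  # 0.0
```

Higher d′ means better discrimination between real and synthetic faces regardless of response bias, which is why the authors compare d′ rather than raw accuracy across the control, Familiarization, and Advice conditions.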
Reference

Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect 'deepfake' images of human faces. Journal of Cybersecurity, 9(1), tyad011.



Article review 2

3. Assessing the impacts of cyberbullying is hindered by the evolving nature of digital communication.

Independent Variables (IV):
- Age groups of social media users
- Types of social media platforms
- Definition criteria used in studies

Dependent Variables (DV):
- Incidence and severity of cyberbullying incidents
- Reported psychological and social impacts on victims

Research Methods

The study employs a systematic literature review, analyzing existing research articles, surveys, and case studies related to cyberbullying on social media. This approach allows the authors to synthesize a wide range of data and perspectives, providing a comprehensive overview of the current state of knowledge on the topic. By reviewing various studies, the research identifies common themes, discrepancies, and gaps in the existing literature, offering insights into the complexities of defining and measuring cyberbullying.

Data Types and Analysis

The data analyzed in this study are primarily qualitative, derived from peer-reviewed journal articles, reports, and surveys. The authors categorize and compare findings from different studies to identify patterns and inconsistencies. The analysis focuses on the definitions of cyberbullying, reported prevalence rates, and the methodologies used to assess its impacts. By examining these aspects, the study highlights the challenges in creating standardized measures for cyberbullying and underscores the need for more robust research designs.

Connection to PowerPoint Concepts
The concepts discussed in the course PowerPoint presentations, such as social norms, digital ethics, and the role of technology in shaping behavior, are directly applicable to the findings of this study. The research underscores how social media platforms can influence individual behavior and societal norms, often in ways that are not fully understood or regulated. It also touches upon ethical considerations in digital interactions, emphasizing the need for responsible use of technology and the development of policies to protect users from harm.

Relation to Marginalized Groups

Cyberbullying disproportionately affects marginalized groups, including LGBTQ+ individuals, racial minorities, and those with disabilities. The study highlights how these groups are often targeted more frequently and severely on social media platforms. The challenges in defining and measuring cyberbullying can exacerbate the difficulties faced by these individuals, as their experiences may be underreported or misunderstood. The research calls for more inclusive and sensitive approaches to studying and addressing cyberbullying, ensuring that the voices of marginalized communities are heard and their needs are met.

Contributions to Society

This study contributes to society by providing a clearer understanding of the complexities surrounding cyberbullying on social media. It emphasizes the importance of standardized definitions and methodologies in researching digital harassment, which can inform policy development, educational programs, and support services for victims. By shedding light on the prevalence and impact of cyberbullying, the research advocates for a more informed and proactive approach to combating this issue, ultimately fostering safer and more inclusive online environments.

Conclusion

In conclusion, Ray et al.'s study offers valuable insights into the multifaceted issue of cyberbullying on social media platforms. By examining definitions, prevalence, and impact-assessment challenges, the research contributes to a deeper understanding of how digital interactions can affect individuals and society. The study's findings underscore the need for standardized approaches in researching cyberbullying and highlight the importance of addressing the concerns of marginalized groups. As social media continues to play a central role in daily life, this research is crucial in guiding efforts to create safer and more equitable digital spaces.

References

https://academic.oup.com/cybersecurity/article/10/1/tyae026/7928395?searchresult=1
 

Cybersecurity Professional Career Paper

 

Student Name: Ivan Ofosu

School of Cybersecurity, Old Dominion University

CYSE 201S: Cybersecurity and the Social Sciences

Instructor Name: Diwalkar Yalpi

Date: 11/16/2005

Introduction

With cyber threats constantly evolving, the role of a Cyber Defense Analyst is more crucial than ever. These professionals serve as the first line of defense against cyberattacks, monitoring networks, analyzing threats, and mitigating vulnerabilities to protect critical information. As organizations increasingly rely on digital infrastructure, Cyber Defense Analysts ensure the confidentiality, integrity, and availability of data, helping maintain public trust and operational continuity. In this paper, I will explore how social science principles intersect with the work of Cyber Defense Analysts, the application of key cybersecurity concepts, the implications for marginalized groups, and the societal contributions of the profession.

Social Science Principles

Cyber Defense Analysts rely heavily on social science research to understand human behaviors that impact cybersecurity. Social science principles, such as behavioral analysis, decision-making, and risk perception, help analysts anticipate how users may respond to phishing attempts, malware, or insider threats (Huntress, n.d.). By understanding organizational culture and human factors, analysts can design effective security training programs, influence user behavior, and implement policies that reduce human error—a major contributor to cybersecurity incidents. Additionally, ethical considerations rooted in social sciences guide analysts in balancing security with privacy and accessibility (CISA, n.d.).

Application of Key Concepts

Cyber Defense Analysts integrate technical and social science concepts in their daily routines. They use Security Information and Event Management (SIEM) tools, intrusion detection systems, and network analysis frameworks to monitor anomalies and detect threats proactively (Huntress, n.d.; CISA, n.d.). Knowledge of risk management, human-computer interaction, and decision-making under uncertainty allows analysts to prioritize threats and coordinate incident response effectively. These professionals also apply threat intelligence to predict adversary behavior, assess organizational vulnerabilities, and develop mitigation strategies, reflecting the application of key cybersecurity concepts learned in class (Merit America, n.d.).

For example, understanding how social engineering exploits human behavior enables analysts to craft policies that prevent successful phishing attacks. They combine this knowledge with technical skills in Python, SQL, and endpoint detection to analyze data and implement security measures, illustrating the interplay between social science and cybersecurity expertise (Huntress, n.d.).
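To make that interplay concrete, here is a hypothetical detection rule of the kind a Cyber Defense Analyst might prototype in Python before encoding it in a SIEM: flag source addresses with repeated failed logins, a common brute-force indicator. The function name, threshold, and event format are illustrative assumptions, not taken from any cited source:

```python
from collections import Counter

def flag_bruteforce(events, threshold=5):
    """Return source IPs whose failed-login count meets the threshold.

    `events` is a list of (source_ip, outcome) tuples, standing in for
    parsed authentication log records.
    """
    failures = Counter(ip for ip, outcome in events if outcome == "FAIL")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Six failures from one host trip the rule; a single failure does not.
events = [("10.0.0.7", "FAIL")] * 6 + [("10.0.0.8", "FAIL"), ("10.0.0.8", "OK")]
print(flag_bruteforce(events))  # ['10.0.0.7']
```

The social science insight sets the threshold (how many failures are normal user error versus an attack), while the technical skill turns that judgment into an automated control.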

 

Marginalization

Cyber Defense Analysts must consider the implications of cybersecurity for marginalized groups. Unequal access to technology, limited digital literacy, and increased targeting in cyberattacks make certain populations more vulnerable. Analysts design training and security policies that are accessible and inclusive, addressing disparities in understanding and access (CISA, n.d.). Furthermore, the cybersecurity profession increasingly emphasizes diversity in hiring to better represent and understand the needs of all user populations. By integrating equity into their strategies, analysts help ensure that cybersecurity measures protect everyone fairly, reducing the disproportionate impact of cyber threats on marginalized communities (Merit America, n.d.; Huntress, n.d.).

Career Connection to Society

Cyber Defense Analysts play a critical role in maintaining societal infrastructure and public trust. They protect financial systems, healthcare networks, government databases, and other essential services from disruption. Their work supports compliance with regulations such as HIPAA and GDPR, preventing breaches that could compromise sensitive information or endanger lives (Huntress, n.d.; CISA, n.d.). Public policies governing cybersecurity benefit from the insights these analysts provide, enabling organizations to implement standards that strengthen overall societal resilience.

Scholarly Journal Articles