Cybersecurity and the Social Sciences
Article Review 1
This article investigates how accurately people can identify StyleGAN2-generated deepfake
images and whether simple interventions—familiarization or advice—improve detection
accuracy. People are slightly better than chance at detecting deepfakes, but the interventions
tested did not improve performance, and participants were overconfident in their inaccurate
judgments.
Relation/Connection to Social Science Principles
This study strongly incorporates multiple principles of social science:
1. Validity of Observation – The experiment examines how human perception processes
visual stimuli and judges authenticity.
2. Empiricism – The study uses systematic data collection from over 1,000 participants to
test human detection accuracy.
3. Parsimony – Only two simple interventions were tested to determine whether minimal
guidance could help users.
4. Objectivity – Images, advice cues, and scoring metrics were standardized to remove
researcher bias.
5. Theory Construction – The study contributes to theories of human decision-making,
deception detection, and confidence miscalibration.
6. Ethical Neutrality – Deepfake technology is examined without moral judgment,
focusing instead on its observable impact.
7. Cumulative Knowledge – Findings extend prior research on misinformation, AI
manipulation, and human cognitive biases.
Research Question / Hypothesis / Independent Variables / Dependent Variables
The article addresses the following questions:
1. How accurately can people distinguish real images from StyleGAN2 deepfake images?
2. Do the Familiarization or Advice interventions improve participants’ detection accuracy
or confidence?
Hypotheses
The authors propose that:
Participants given Familiarization or Advice will show improved accuracy compared to
the control group.
Participants may exhibit high confidence in their judgments regardless of actual accuracy.
Independent Variables
Presence or absence of Familiarization intervention (viewing 20 deepfake examples
beforehand).
Presence or absence of Advice intervention (receiving specific cues for spotting
deepfakes).
Image type (real vs. StyleGAN2-generated deepfake).
Dependent Variables
Accuracy of identifying each image as real or deepfake.
Confidence ratings associated with judgments.
Use of advice cues reported by participants.
Types of Research Methods Used
This study used quantitative experimental research.
Over 1,000 participants were randomly assigned to control, Familiarization, or Advice
conditions.
Participants viewed 100 images and made binary judgments (real or deepfake).
They provided confidence ratings and self-reported cues used to make decisions.
Data were collected through a structured online decision task.
Types of Data Analysis Used
The authors used several quantitative analysis techniques:
Accuracy scoring to determine overall performance.
Signal Detection Theory (d′) to measure sensitivity independent of response bias.
ANOVA tests to compare d′ across conditions and assess statistical significance.
Descriptive statistics (mean accuracy, confidence levels, response times).
Comparisons of per-image accuracy to see which stimuli were most or least difficult.
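As an illustration of the signal-detection analysis described above, d′ can be computed from hit and false-alarm counts. This is a minimal sketch, not the authors' code; the log-linear correction and the function name are my own conventions:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the rates away from 0 and 1,
    # where the inverse normal CDF would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

A d′ of 0 corresponds to chance-level sensitivity, so the study's finding that people detect deepfakes only slightly better than chance corresponds to a small positive d′, independent of any bias toward answering "real" or "fake."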
Connections to Other Course Concepts
This study directly connects to cybersecurity and social science concepts discussed in class:
Human Vulnerability & Social Engineering: The research shows humans are
overconfident and inaccurate—conditions that attackers exploit.
Misinformation and Trust: Deepfakes challenge trust in digital media, a recurring topic
in the course.
Cognitive Biases: Overconfidence bias appears clearly—participants believed they were
accurate even when they were not.
Media Effects: Exposure to synthetic media influences perception and judgment.
Technological Determinism: The power of StyleGAN2 demonstrates how technology
shapes human behavior and societal risks.
Risk Perception: Users assume they can detect fakes, but the study shows this
perception is flawed.
Cybercrime & Fraud: Deepfakes can enable identity theft, impersonation, and online
scam activities, reinforcing course lessons.
Connections to the Concerns or Contributions of Marginalized Groups
The article notes that older adults may be especially vulnerable to deepfake deception due to
reduced familiarity with technology, potential visual limitations, or increased targeting by
fraudsters. While the study’s sample was mostly young adults, it emphasizes the need for future
research focused on older populations to assess their risk.
Overall Societal Contributions of the Study / Conclusion
This study contributes to society by demonstrating that humans are not reliable detectors
of deepfakes and that simple interventions are ineffective. As deepfake technology becomes
more advanced—allowing changes in lighting, background, facial expression, and even matching
an existing person’s identity—the potential for misuse increases. Combined with accessible
audio deepfakes, attackers can construct fully synthetic identities at scale.
Reference
Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect 'deepfake' images of human faces. Journal of Cybersecurity, 9(1), tyad011.
Article Review 2
Ray et al.'s study examines cyberbullying on social media platforms, where definitions and measures must keep pace with the evolving nature of digital communication.
• Independent Variables (IV):
o Age groups of social media users
o Types of social media platforms
o Definition criteria used in studies
• Dependent Variable (DV):
o Incidence and severity of cyberbullying incidents
o Reported psychological and social impacts on victims
Research Methods
The study employs a systematic literature review, analyzing existing research articles,
surveys, and case studies related to cyberbullying on social media. This approach allows
the authors to synthesize a wide range of data and perspectives, providing a
comprehensive overview of the current state of knowledge on the topic. By reviewing
various studies, the research identifies common themes, discrepancies, and gaps in
existing literature, offering insights into the complexities of defining and measuring
cyberbullying.
Data Types and Analysis
The data analyzed in this study are primarily qualitative, derived from peer-reviewed
journal articles, reports, and surveys. The authors categorize and compare findings from
different studies to identify patterns and inconsistencies. The analysis focuses on the
definitions of cyberbullying, reported prevalence rates, and the methodologies used to
assess its impacts. By examining these aspects, the study highlights the challenges in
creating standardized measures for cyberbullying and underscores the need for more
robust research designs.
Connection to PowerPoint Concepts
Course concepts covered in the PowerPoint slides, including digital ethics and the role of technology in shaping behavior, are directly applicable to the
findings of this study. The research underscores how social media platforms can influence
individual behavior and societal norms, often in ways that are not fully understood or
regulated. It also touches upon ethical considerations in digital interactions, emphasizing
the need for responsible use of technology and the development of policies to protect
users from harm.
Relation to Marginalized Groups
Cyberbullying disproportionately affects marginalized groups, including LGBTQ+
individuals, racial minorities, and those with disabilities. The study highlights how these
groups are often targeted more frequently and severely on social media platforms. The
challenges in defining and measuring cyberbullying can exacerbate the difficulties faced
by these individuals, as their experiences may be underreported or misunderstood. The
research calls for more inclusive and sensitive approaches to studying and addressing
cyberbullying, ensuring that the voices of marginalized communities are heard and their
needs are met.
Contributions to Society
This study contributes to society by providing a clearer understanding of the complexities
surrounding cyberbullying on social media. It emphasizes the importance of standardized
definitions and methodologies in researching digital harassment, which can inform policy
development, educational programs, and support services for victims. By shedding light on
the prevalence and impact of cyberbullying, the research advocates for a more informed
and proactive approach to combating this issue, ultimately fostering safer and more
inclusive online environments.
Conclusion
In conclusion, Ray et al.’s study offers valuable insights into the multifaceted issue of
cyberbullying on social media platforms. By examining definitions, prevalence, and impact
assessment challenges, the research contributes to a deeper understanding of how digital
interactions can affect individuals and society. The study’s findings underscore the need
for standardized approaches in researching cyberbullying and highlight the importance of
addressing the concerns of marginalized groups. As social media continues to play a
central role in daily communication, continued research of this kind is essential for building safer and more
equitable digital spaces.
References
https://academic.oup.com/cybersecurity/article/10/1/tyae026/7928395?searchresult=1
Cybersecurity Professional Career Paper
Student Name: Ivan Ofosu
School of Cybersecurity, Old Dominion University
CYSE 201S: Cybersecurity and the Social Sciences
Instructor Name: Diwalkar Yalpi
Date: 11/16/2025
Introduction
With cyber threats constantly evolving, the role of a Cyber Defense Analyst is more crucial than ever. These professionals serve as the first line of defense against cyberattacks, monitoring networks, analyzing threats, and mitigating vulnerabilities to protect critical information. As organizations increasingly rely on digital infrastructure, Cyber Defense Analysts ensure the confidentiality, integrity, and availability of data, helping maintain public trust and operational continuity. In this paper, I will explore how social science principles intersect with the work of Cyber Defense Analysts, the application of key cybersecurity concepts, the implications for marginalized groups, and the societal contributions of the profession.
Social Science Principles
Cyber Defense Analysts rely heavily on social science research to understand human behaviors that impact cybersecurity. Social science principles, such as behavioral analysis, decision-making, and risk perception, help analysts anticipate how users may respond to phishing attempts, malware, or insider threats (Huntress, n.d.). By understanding organizational culture and human factors, analysts can design effective security training programs, influence user behavior, and implement policies that reduce human error—a major contributor to cybersecurity incidents. Additionally, ethical considerations rooted in social sciences guide analysts in balancing security with privacy and accessibility (CISA, n.d.).
Application of Key Concepts
Cyber Defense Analysts integrate technical and social science concepts in their daily routines. They use Security Information and Event Management (SIEM) tools, intrusion detection systems, and network analysis frameworks to monitor anomalies and detect threats proactively (Huntress, n.d.; CISA, n.d.). Knowledge of risk management, human-computer interaction, and decision-making under uncertainty allows analysts to prioritize threats and coordinate incident response effectively. These professionals also apply threat intelligence to predict adversary behavior, assess organizational vulnerabilities, and develop mitigation strategies, reflecting the application of key cybersecurity concepts learned in class (Merit America, n.d.).
For example, understanding how social engineering exploits human behavior enables analysts to craft policies that prevent successful phishing attacks. They combine this knowledge with technical skills in Python, SQL, and endpoint detection to analyze data and implement security measures, illustrating the interplay between social science and cybersecurity expertise (Huntress, n.d.).
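To make this interplay concrete, the following hypothetical sketch shows the kind of simple log triage an analyst might script in Python. The log lines, threshold, and function name are invented for illustration; real analysts would pull events from a SIEM or authentication log:

```python
from collections import Counter

# Invented example log lines, standing in for real auth-log data.
log_lines = [
    "Nov 16 10:01:02 host sshd: Failed password for admin from 203.0.113.5",
    "Nov 16 10:01:04 host sshd: Failed password for admin from 203.0.113.5",
    "Nov 16 10:01:07 host sshd: Failed password for root from 203.0.113.5",
    "Nov 16 10:02:11 host sshd: Accepted password for alice from 198.51.100.7",
]

def failed_logins_by_ip(lines):
    """Count failed login attempts per source IP address."""
    counts = Counter()
    for line in lines:
        if "Failed password" in line:
            # The IP is the last whitespace-separated token in these lines.
            counts[line.rsplit(" ", 1)[-1]] += 1
    return counts

THRESHOLD = 3  # flag IPs at or above this many failures
suspicious = {ip for ip, n in failed_logins_by_ip(log_lines).items() if n >= THRESHOLD}
```

The technical step (counting events) is trivial; the analytic judgment lies in choosing a threshold and interpreting the result, which is where knowledge of attacker behavior and organizational context comes in.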
Marginalization
Cyber Defense Analysts must consider the implications of cybersecurity for marginalized groups. Unequal access to technology, limited digital literacy, and increased targeting in cyberattacks make certain populations more vulnerable. Analysts design training and security policies that are accessible and inclusive, addressing disparities in understanding and access (CISA, n.d.). Furthermore, the cybersecurity profession increasingly emphasizes diversity in hiring to better represent and understand the needs of all user populations. By integrating equity into their strategies, analysts help ensure that cybersecurity measures protect everyone fairly, reducing the disproportionate impact of cyber threats on marginalized communities (Merit America, n.d.; Huntress, n.d.).
Career Connection to Society
Cyber Defense Analysts play a critical role in maintaining societal infrastructure and public trust. They protect financial systems, healthcare networks, government databases, and other essential services from disruption. Their work supports compliance with regulations such as HIPAA and GDPR, preventing breaches that could compromise sensitive information or endanger lives (Huntress, n.d.; CISA, n.d.). Public policies governing cybersecurity benefit from the insights these analysts provide, enabling organizations to implement standards that strengthen overall societal resilience.
Scholarly Journal Articles
- Source 1: CISA. (n.d.). Cyber defense analyst. Cybersecurity & Infrastructure Security Agency. https://www.cisa.gov/careers/work-roles/cyber-defense-analyst
- Source 2: Huntress. (n.d.). What does a security analyst do? Huntress. https://www.huntress.com/cybersecurity-101/topic/what-does-security-analyst-do
- Source 3: Merit America. (n.d.). Cybersecurity career track. Merit America. https://meritamerica.org/career-tracks/cybersecurity/