Career Paper

The Role of Social Science in Cybersecurity Analysis

Introduction

Having spent years navigating the high-stakes environment of aviation, I’ve seen how human behavior shapes outcomes for better or worse, in aviation and elsewhere. As a cybersecurity analyst, the job isn’t just about code or firewalls; it’s about understanding people. This career demands not only technical expertise but also a grounding in social science to predict, prevent, and respond to cyber threats. Social science principles like relativism, skepticism, and empiricism, along with research methods like surveys and field studies, provide a roadmap for analysts in their daily work. This paper explores how cybersecurity analysts rely on these concepts and how the profession interacts with marginalized groups and society as a whole. Drawing from reliable sources, it shows how class material applies to the practices of this profession.

Social Science Principles in Cybersecurity Analysis

Cybersecurity analysts protect systems by studying threats, many of which exploit human behavior. Relativism, or the idea that perspectives vary by context, helps analysts understand why a phishing scam might target one group differently than another. For example, attackers might use social engineering tactics, like fake job offers, to exploit economic vulnerabilities in low-income communities (Advisense, 2025). Analysts use relativism to anticipate these tactics, tailoring defenses to specific user groups. Skepticism keeps analysts sharp, questioning whether a “routine” system alert is truly benign or a sign of a deeper breach. Empiricism directs the job: analysts rely on data, such as logs, user activity, and attack patterns, to build evidence-based conclusions rather than assumptions (Gonzalez & Sawicka, 2002).

These principles shape daily routines. A typical day involves reviewing system logs, investigating anomalies, and educating users. Analysts might notice unusual login attempts from a marginalized community’s network, prompting them to dig deeper. Was it a targeted attack exploiting trust in a local institution? Relativism helps outline the context; skepticism pushes for verification; empiricism relies on data to confirm how to move forward. This blending of principles helps analysts avoid rushing to judgment, especially when protecting vulnerable groups who might distrust technology due to historical exclusion (Gonzalez & Sawicka, 2002).
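To make the empirical step concrete, here is a minimal sketch of what flagging unusual login attempts might look like in Python. The field names, thresholds, and sample events are hypothetical illustrations, not any particular analyst’s actual tooling:

    # Hypothetical sketch: flag logins outside a user's usual hours or networks.
    # Field names, the hour window, and the sample events are all invented.

    def flag_unusual_logins(events, known_networks, allowed_hours=range(6, 20)):
        """Return (event, reasons) pairs whose source network or hour is atypical."""
        flagged = []
        for event in events:
            reasons = {
                "odd_network": event["network"] not in known_networks,
                "odd_hour": event["hour"] not in allowed_hours,
            }
            if any(reasons.values()):
                flagged.append((event, reasons))
        return flagged

    events = [
        {"user": "jdoe", "network": "10.0.0.0/8", "hour": 9},
        {"user": "jdoe", "network": "10.0.0.0/8", "hour": 14},
        {"user": "jdoe", "network": "203.0.113.0/24", "hour": 3},
    ]
    for event, reasons in flag_unusual_logins(events, known_networks={"10.0.0.0/8"}):
        print(event["user"], reasons)

A flag like this is only the start of the inquiry: skepticism still demands verifying whether an odd hour or network reflects an attack or simply a user traveling.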

Social Science Research Methods in Practice

Research methods from class, such as surveys, field studies, and multimethod research, are critical for analysts. Surveys help gauge user awareness, like asking employees how often they click suspicious links. This data reveals behavioral risks, especially in underserved communities where digital literacy might lag (Aura Information Security, n.d.). Field studies take analysts into real-world settings, observing how people interact with technology. For example, a study in a rural area might show reliance on outdated software, explaining why attacks succeed there. Multimethod research combines these approaches, blending survey results with field observations to build robust defenses.
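As a hypothetical illustration of how such survey data might be summarized before being paired with field observations, here is a minimal sketch; the groups and responses are invented, not real results:

    # Hypothetical sketch: self-reported phishing click rates by group.
    from collections import defaultdict

    def click_rate_by_group(responses):
        """responses: iterable of (group, clicked_suspicious_link) pairs."""
        totals = defaultdict(lambda: [0, 0])  # group -> [clicks, respondents]
        for group, clicked in responses:
            totals[group][0] += int(clicked)
            totals[group][1] += 1
        return {group: clicks / count for group, (clicks, count) in totals.items()}

    responses = [
        ("finance", True), ("finance", False),
        ("warehouse", True), ("warehouse", True),
    ]
    print(click_rate_by_group(responses))  # {'finance': 0.5, 'warehouse': 1.0}

A summary like this is descriptive, not conclusive; it points the analyst toward which groups to observe in the field.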

The dependent variable, cybersecurity outcomes, ties these methods together. Analysts study cyberspace, where human decisions, like clicking a link or sharing a password, drive vulnerability (Advisense, 2025). Unlike traditional social science, the risks here are immediate: one mistake can cripple a network. This urgency influences how analysts apply research, prioritizing quick, actionable steps over extended studies.

Psychology Theories and Human Behavior

Psychology theories from class inform much of an analyst’s work. Cognitive theories explain why users fall for scams: mental shortcuts make people trust familiar-looking emails. Neutralization theories show how attackers justify their crimes, like claiming they’re “testing” systems. Behavioral theories help analysts predict user actions, such as reusing weak passwords (Advisense, 2025). These insights guide training programs, especially for marginalized groups who might face unique pressures, like economic desperation driving riskier online choices.

For example, an analyst might design a campaign to teach phishing awareness, using cognitive theory to create intuitive warnings for non-tech-savvy users (Aura Information Security, n.d.). They would also draw on neutralization theory to counter excuses like “I didn’t think it was a big deal.” This approach helps protections reach people across diverse populations, building trust in systems that are often seen as selective or inaccessible.

Interactions with Marginalized Groups and Society

Cybersecurity analysts don’t just guard servers; they safeguard people. Marginalized groups, including low-income communities, ethnic minorities, and the elderly, face unequal risks. Scammers exploit their limited access to education or technology, crafting schemes that prey on trust or fear (Gonzalez & Sawicka, 2002). Analysts must account for these dynamics, using social science to design defenses that work for many different kinds of people and circumstances. Sociology highlights how social structures, like poverty, shape access to technology; psychology shows why certain groups are targeted.

Society benefits greatly when analysts get this right. A breach doesn’t just hit a company; it erodes public trust, especially among those already skeptical of institutions. Analysts bridge this gap by advocating for user-friendly security, like clear password guides or multilingual alerts (Aura Information Security, n.d.). But challenges remain: over-monitoring can feel invasive or hostile, undermining these strategies by alienating the very groups they are meant to protect. Balancing protection with privacy is essential, guided by ethical neutrality and a commitment to acting without bias toward any one group.

Conclusion

This career interests me because it isn’t just about technology; it’s about people. Cybersecurity analysts lean on social science to predict and explain human behavior, from why someone clicks a bad link to how attackers exploit trust. Principles like relativism and empiricism, combined with methods like surveys and field studies, frame their routines. Psychology theories make sense of user choices, guiding protections that must work for everyone. Society relies on analysts to keep systems safe without infringing on people’s freedoms, a critically important balance. As technologies and threats evolve, analysts must evolve with them, deepening their understanding and refining their methods, combining technical skill and social science to stay ahead. This isn’t just a job; it’s a mission to protect us from online threats while preserving our humanity.

References

Advisense. (2025, March 17). Human factor in cybersecurity: Social engineering.
https://advisense.com/2025/03/17/human-factor-in-cybersecurity-social-engineering/

Aura Information Security. (n.d.). The future of cybersecurity: Human-centred design. Aura Research. https://research.aurainfosec.io/advisory/the-future-of-cybersecurity-human-centred-design/

Gonzalez, J. J., & Sawicka, A. (2002). A framework for human factors in information security. IFIP Conference on Human-Computer Interaction, 307–312.
https://pmc.ncbi.nlm.nih.gov/articles/PMC524624/