Description of the Course
PHIL 355E gave me a structured space to explore the ethical and cultural implications of cybersecurity, privacy, and emerging technologies like AI. It wasn’t just about technical systems; it was about people. The course helped me examine how communities experience surveillance, bias, and digital risk differently depending on race, location, access, and history.
As someone raised across two very different cultures, Nigeria and the United States, I found that this course gave language to things I had felt intuitively: the gaps in global access, the power dynamics in tech development, and the ethical weight of the systems we build.
⸻
Work Samples
• Final Paper – “Free Speech vs. Deepfakes: Who Gets Protected in the Age of AI?”
This paper explored the tension between First Amendment protections and the real-world harm caused by generative AI. It looked at who is most likely to be targeted or misrepresented and how policy could better protect marginalized voices.
• Discussion Posts & Weekly Reflections
I consistently reflected on how ethical theories connect to real-world tech use, and how cultural values shape what we see as “normal” online.
⸻
Reflection
This course helped me step back from the code and think more deeply about why we build what we build, and who it affects. One of the most important takeaways was realizing that technology is never neutral. Even in cybersecurity, the policies we write, the models we train, and the systems we secure all reflect cultural priorities and biases.
I also saw how my own identity shapes how I view systems: I come from a culture where privacy is often communal, and I now live in a society where data ownership is highly individual. Sitting with that tension helped me learn to adapt my communication, challenge my assumptions, and approach tech design with greater cultural awareness.
⸻
Skills Developed
Through this course, I developed several employer-valued skills:
• Obtain and process information: Evaluated ethical frameworks across cultural contexts to support policy suggestions in AI and cybersecurity.
• Communicate verbally and in writing: Participated in class discussions and wrote reflection papers that translated abstract theory into applied tech ethics.
• Make decisions and solve problems: Tackled real-world dilemmas like misinformation, facial recognition bias, and AI impersonation with structured ethical reasoning.
⸻
Relevance to My Goals
My long-term goal is to lead within cybersecurity GRC (governance, risk, and compliance) or AI governance. This course reinforced that cultural context and ethical reflection are just as important as technical knowledge when shaping policy or protecting users. Whether I’m working in cloud compliance or consulting on AI risk, the ability to hold space for different perspectives, and to design with them in mind, is a core leadership skill.
This experience also pushed me to reflect more honestly on how my own background shapes how I lead, how I build, and how I connect with people in the tech world and beyond.