PHIL 355E – Cyberconflict Module Reflection

Through your work in this module, you should have gained a robust and multifaceted understanding of cyberconflict, and gained experience using ethical principles to think through just war theory in a cybersecurity context. Next, we’ll be turning to “informational warfare,” the use of cyber resources to mislead the public and influence people.

Before going on to the next module, take a minute and write down:

  • Something about cyberconflict that makes sense to you now that didn’t before, or
  • Something about cyberconflict that you thought made sense before that you realize now does not, or
  • Something that you’re still trying to figure out about cyberconflict.

Something about cyberconflict that makes sense to me now is the need for an expanded version of Just War Theory to account for its complexities. One reason is the ambiguity around who is involved in a conflict and where attacks come from. Traditional warfare assumes specific actors attacking from a single area or location, but because anyone can be a malicious actor and cyber-attacks can originate from any place with enough technological capability, that older framework isn't effective in this new setting. Another reason is that traditional Just War Theory cannot account for the scope of attacks. With attacks like Stuxnet, even though there is a specific target, the malware can spread and cause unintended damage to systems that are not part of the conflict.

This culminates in the need for new agreements on cyberconflict that establish and enforce ethical regulations of action. Principles such as limiting attacks to those that resolve conflict and restore the status quo, and an emphasis on removing and preventing entropy within the Infosphere, are necessary for cyberconflict. This stands in contrast to a traditional perspective in which attacks are carried out in retaliation or simply out of a sense of self-satisfaction. In conclusion, this module effectively taught me about the ethical components of cyberconflict and the changes that need to be implemented for the proper conduct of cyberwarfare.

PHIL 355E – Cyberconflict Case Study

First, read “The Real Story of Stuxnet.” [backup download: stuxnet-ieee-spectrum.pdf]

Your question to answer is: Was the Stuxnet attack an act of cyberwarfare? If so, can it be ethically justified? If not, why not?

The Stuxnet attack was an act of cyberwarfare. Its purpose was to sabotage an Iranian uranium enrichment facility. Because the attack was directed at infrastructure within Iran, it can be deemed an act of war, and, since it was carried out with malware rather than conventional weapons, an act of cyberwarfare.

However, even if it was an act of cyberwarfare, it cannot be ethically justified because of the negative consequences that resulted from it. Stuxnet spawned related malware such as Flame, Duqu, and Gauss, powerful tools whose code has leaked into public hands over time. This gives cybercriminals the opportunity not only to reverse engineer them but to reuse their components to commit cybercrimes more easily. Furthermore, with its capabilities now known, Stuxnet can serve as a blueprint for other nations to create even more complex malware for cyberwarfare. Coupled with the fact that many companies remain far behind in cybersecurity awareness, this creates an optimal environment for more frequent and destructive cyberattacks, all stemming from this single event.

PHIL 355E – Whistleblowing Module Reflection

Through your work in this module, you should have gained a robust and multifaceted understanding of ethical concerns about whistleblowing and loyalty, and gained experience using ethical principles to think through whistleblowing and loyalty issues in a cybersecurity context. Next, we’ll be turning to cyberconflict to talk about how cybersecurity tools can be used as weapons, and whether that is ethical. 

Before going on to the next module, take a minute and write down:

  • Something about whistleblowing and loyalty that makes sense to you now that didn’t before, or
  • Something about whistleblowing and loyalty that you thought made sense before that you realize now does not, or
  • Something that you’re still trying to figure out about whistleblowing and loyalty.

Two aspects of whistleblowing and loyalty that make sense to me now that didn't before are the idea that whistleblowing can be a loyal act and the realization that the layers of loyalty within organizations are complex. Whenever I heard about acts of whistleblowing, I perceived the whistleblower as someone who only had personal reasons to out a company that way. While personal reasons are still a large factor in why someone blows the whistle, that choice may also come from loyalty to the organization's principles and a desire to improve it by disclosing information. There are also situations where there is no other outlet for releasing information, even when that information matters to the public's safety or concerns. Loyalty itself is flexible in which relationships it attaches to and in how strong it is. In organizations there is no strict obligation to be loyal; people can be tied to an organization's values, or they can simply follow blindly without considering potential harms. This ties back into how whistleblowing can be loyal: it can uphold an organization's principles, which may extend to protecting the public, even while going against the organization by exposing it in this specific way.

Overall, after this module, I now understand how whistleblowing can be a moral act because it values the public's interests, and how employees' varying loyalties affect the decision to blow the whistle in the first place.

PHIL 355E – Whistleblowing Case Study

First, read this article:  “Edward Snowden: the whistleblower behind the NSA surveillance revelations.”

Your question to answer is: Did Snowden do the right thing? Why or why not?

I would argue that Edward Snowden did the right thing by disclosing those documents. After working in government intelligence for almost a decade and watching the public's privacy degrade, he felt he couldn't stand by and let it continue. He wanted internet freedom to remain with the public and its value to be retained. His goal was simply to inform the public and have them question the government's practices and mass surveillance. He even noted that the specific documents he disclosed were ones he considered beneficial to public knowledge, and that his goal was not to harm but to create more transparency. Furthermore, given the money he made and the life he had, the easier choice would have been to play the bystander. With his level of access to confidential information, he could also have sold it to other nations. Yet he felt that the public's awareness of the government's practices was more valuable than all of that. Even at the cost of his safety and everything he had, he treated the public's privacy as a right that should always be valued, and that is why Snowden's choice to blow the whistle on the NSA for the public's sake was morally correct.

PHIL 355E – Professional Ethics Module Reflection

Through your work in this module, you should have gained a robust and multifaceted understanding of the professional ethical requirements in cybersecurity fields, and gained experience using ethical principles to think through professional ethics issues in a cybersecurity context. Next, we’ll be turning to whistleblowing to think through what you should do if you see something unethical. 

Before going on to the next module, take a minute and write down:

  • Something about professional ethics that makes sense to you now that didn’t before, or
  • Something about professional ethics that you thought made sense before that you realize now does not, or
  • Something that you’re still trying to figure out about professional ethics.

An aspect of professional ethics that makes sense to me now is respect for the public's well-being and its importance across all forms of professional ethics. Within each code of ethics there was an emphasis on preventing harm by keeping public welfare in mind and notifying an employer or client when potential harms arise. Furthermore, according to Armstrong, if what a client or employer wants clashes with the public interest, whistleblowing remains an option for informing the public of potential harm.

A cybersecurity context only reinforces the importance of public safety and confidentiality. A crucial part of the cybersecurity profession is not only safeguarding users' data but also serving as a source of trust the public can depend on. That commitment to serving the public is fundamental to every internal and external position. When a profession gives little acknowledgment to the public's safety, the consequences can be drastic and can even threaten lives. After going through this module, I have a deeper understanding of how professionals must consider the effects of their work because of the impact it can have on the public, and of the respect for public safety that this requires.

PHIL 355E – Professional Ethics Case Study

First, watch this 20-minute TED Talk: “We’re Building a Dystopia Just to Make People Click on Ads.”

Your question to answer is: Imagine you were writing your own code of ethics. Come up with one plausible principle to include which would resolve some or several of the problems that Tufekci identifies. Explain why such a principle ought (or ought not) be included in a code of ethics. 

A plausible principle to address the opacity of algorithms could be “Multi-Level Collaboration in the Creation of Algorithms.” This principle requires companies to collaborate with multiple parties when creating algorithms in order to ensure an ethical result for users. These parties include the company's software designers, who build the algorithm itself; ethicists, who maintain moral integrity and verify that the algorithm meets ethical standards; and policymakers, who further ensure compliance with privacy and legal standards.

This principle should be included in a code of ethics for multiple reasons. First, involving multiple parties in the process would improve clarity about how algorithms work, and that process would create an ethical blueprint that future algorithms could follow. Second, the amount of data companies collect would become more limited, because the algorithm itself would be constrained in what it could do to people. The result would be a healthier social media environment with less manipulation of people and their beliefs.

While this principle would be an effective start, difficulties with transparency would remain. For example, even though users would gain some insight into algorithms, the proprietary nature of those algorithms would be an obstacle, since companies would still have a choice about whether to share with users how their algorithms work. Even so, this principle could help tackle the issues Tufekci raises by creating a better understanding of how algorithms work and generating a more transparent environment for users on social media.

PHIL 355E – Corporate Social Responsibility Module Reflection

Through your work in this module, you should have gained a robust and multifaceted understanding of corporate social responsibility, and gained experience using ethical principles to think through corporate social responsibility issues in a cybersecurity context. Next, we’ll be turning to professional ethics to think about what moral requirements you gain by becoming a cybersecurity professional. 

Before going on to the next module, take a minute and write down:

  • Something about CSR that makes sense to you now that didn’t before, or
  • Something about CSR that you thought made sense before that you realize now does not, or
  • Something that you’re still trying to figure out about CSR.

Something about Corporate Social Responsibility that I didn't know before is how crucial customers are in maintaining the social contract between businesses and society. Referring to Anshen, the old model in which corporations exist solely to make a profit and are not accountable for the costs they create only produces more harm for society, such as environmental damage. Adopting social responsibilities and constructive adaptations makes corporations more aware of the effects they have on everything around them, and listening to customers' concerns can create more beneficial change. Being attentive to customers' desires and practicing ethically will also bring in more people.

I now know how important customers are to a business and that they need to be respected for the social contract to hold. This involves being open about security practices, not restricting people's autonomy over their data or choices, and keeping them informed about what is happening. Implementing these kinds of practices can create a positive feedback loop for corporations, which helps align them with stakeholders and societal needs. It also gives customers a greater sense of security, which makes them more inclined to stay.

PHIL 355E – Corporate Social Responsibility (CSR) Case Study

First, read this article:  Is High Frequency Trading Ethical? [backup link].

Your question to answer is: Which of the four arguments that High Frequency Trading is ethical is the most convincing? Why? Which of the three arguments that HFT is unethical is the most convincing? Why? On the whole, do you think HFT is ethical or unethical?

Of the four arguments, the most convincing argument that High Frequency Trading (HFT) is ethical is Reduced Costs. Because HFT narrows bid/ask spreads and reduces the cost of trading stocks, swaps, and bonds, smaller investors gain more power to invest and a better chance to grow. That influence creates a more equal market with improved opportunities for investing.

Conversely, I find Market Manipulation to be the most convincing argument that HFT is unethical. While HFT is an effective tool in the stock market, that effectiveness is a double-edged sword: it can be used to manipulate the market and create illegal advantages. In the article, Trillium Capital, an HFT firm, baited other traders into reacting to limit orders it never intended to execute and then canceled those orders. Unethical practices like these are possible with HFT and create reasonable worry about investing and trading, and the potential for further dangerous events in the market is also concerning given HFT's nature.

Because of this inherent potential for abuse, I hold that HFT, as it currently operates, is an unethical practice. Despite the benefits that HFT's reduced costs bring to smaller investors and the market as a whole, its speed and effectiveness make its algorithmic trading too easy to use for illegal practices. While defenses like circuit breakers, which halt trading when prices move too quickly, already exist, more safeguards need to be put in place for more structured handling of this tool.

PHIL 355E – Data Ethics Module Reflection

Through your work in this module, you should have gained a good understanding of data ethics, and gained experience using ethical principles to think through responsible treatment of user data in a cybersecurity context. Next, we’ll be turning to corporate social responsibility to think about moral obligations that businesses have to the public.

Before going on to the next module, take a minute and write down:

  • Something about data ethics that makes sense to you now that didn’t before, or
  • Something about data ethics that you thought made sense before that you realize now does not, or
  • Something that you’re still trying to figure out about data ethics.

A part of data ethics that I already knew about was consenting to applications' terms and conditions in order to use them, but I never realized how many faults exist in the current system. One example is the long document of text, or the overall process, that organizations know consumers won't read or care about, which effectively disincentivizes users from looking into or thinking about what they consent to. By contrast, a simplified terms and conditions agreement is rarely offered, even for applications people use daily. Another example is data mining and data analysis, which largely happen behind users' backs and put their data to multiple uses. Looking at the case analysis, policies like the right to be forgotten or complete opt-out processes are also a rarity for users. Examples like these show that while I knew about consent for user data, I didn't realize how deep the conversation extends.

In short, this module helped me gain a deeper understanding of the nature of consent regarding user data. It paints a picture of how the current policies governing user data need to change so that users are involved in, and informed about, what happens to their data.

PHIL 355E – Data Ethics Case Study

First, read this article:  “This Time, Facebook Really Might be Fucked” by Rhett Jones.  

Your question to answer is: Why is it important to protect user data? If people agree to the data protection statements of social media websites, is there still something wrong with mining their data? Why or why not?

Protecting user data is necessary for any application that uses it in any way. It is important because of the range of identifiable information that can be involved on any site, especially social media platforms: names, pictures, interests, beliefs, home addresses, and financial data. If this information is put into the hands of companies, then effective security practices are crucial for preventing breaches that can lead to identity fraud or other crimes with a large impact on people's lives.

This impact was shown in Facebook's scandal with Cambridge Analytica, in which Facebook allowed the third-party firm to mine the personal data of fifty million accounts for around three years without users knowing. So, to answer the question about the ethics of data mining: even when users agree to the data protection statements of social media sites, there is still an ethical problem with mining their data because of the lack of transparency given to users. In the Cambridge Analytica case, users agreed to entrust their data to Facebook, not to a third-party company that took their data without any pushback. Ignoring users' data agreement by handing their data to another party undermines the agreement's purpose, even if data mining is slipped into the fine print. Furthermore, even users who know their data is being mined still need to know who has access to it and what it is being used for if transparency and ethical standards are to be maintained.