{"id":286,"date":"2026-03-26T16:24:08","date_gmt":"2026-03-26T16:24:08","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/?p=286"},"modified":"2026-03-26T16:24:08","modified_gmt":"2026-03-26T16:24:08","slug":"cyber-policies","status":"publish","type":"post","link":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/2026\/03\/26\/cyber-policies\/","title":{"rendered":"Cyber Policies"},"content":{"rendered":"Part A \u2013 Short Answer Questions<br \/>1)\tA cybersecurity policy is a formal organizational document that outlines rules, procedures, and expectations for protecting information systems, data, and technology resources from security threats.<br \/><br \/>2)\tCybersecurity policies are important because they establish consistent security standards, reduce organizational risk, protect sensitive information, ensure compliance with laws and regulations, and guide employee behavior to prevent security incidents.<br \/><br \/><br \/>3)\tThree cybersecurity risks of using AI tools in the workplace are:<br \/>\u2022 Data Leakage: Employees may unintentionally enter confidential or proprietary information into AI tools, exposing it outside the organization.<br \/>\u2022 Inaccurate or Manipulated Output: AI tools can produce incorrect or biased results, which may compromise decision\u2011making or introduce security vulnerabilities.<br \/>\u2022 Intellectual Property Exposure: AI-generated code or content may reuse copyrighted or proprietary material, creating legal and ownership risks.<br \/><br \/><br \/>4)\tImproper AI tool usage can violate the CIA Triad in the following ways:<br \/>\u2022 Confidentiality: Sensitive data entered into AI tools may be stored, logged, or accessed by unauthorized parties.<br \/>\u2022 Integrity: AI-generated information may be incorrect or altered, leading to corrupted data or inaccurate work.<br \/>\u2022 Availability: Overreliance on AI tools or misuse could disrupt workflows or disable systems if errors occur or tools become 
unreachable.<br \/><br \/><br \/>5)\tAcceptable use means using technology in ways that comply with organizational policies and ethical guidelines and that serve job\u2011related purposes. Misuse occurs when technology is used in ways that violate policies, expose the organization to risk, or pursue personal, unethical, or unauthorized activities.<br \/><br \/><br \/><br \/>Part B \u2013 AI Usage Cybersecurity Policy<br \/>Purpose<br \/>The purpose of this policy is to establish guidelines for the safe, responsible, and secure use of artificial intelligence (AI) tools within the organization. This policy exists to protect confidential data, prevent the unauthorized disclosure of information, support legal and regulatory compliance, and ensure that AI tools are used ethically and in alignment with cybersecurity best practices.<br \/>Scope<br \/>This policy applies to all employees, contractors, interns, third\u2011party partners, and anyone who uses company systems or accesses company data while using AI tools. It covers all AI systems, including chatbots, coding assistants, generative AI models, and productivity AI tools used for work-related tasks.<br \/>Acceptable Use<br \/>Employees may use approved AI tools for general research, brainstorming, drafting non\u2011sensitive content, improving productivity, generating code templates, and assisting with problem\u2011solving tasks. Users may input only publicly available, non\u2011confidential information. Employees must verify the accuracy of AI-generated content and must follow workplace policies, ethical standards, and supervisor instructions when using AI tools.<br \/>Prohibited Use<br \/>Employees are not permitted to input confidential, sensitive, personal, proprietary, or regulated data into AI tools. Users may not upload internal documents, client information, financial data, source code, trade secrets, or any information classified as restricted. 
AI tools must not be used to violate laws, create harmful content, plagiarize, bypass security controls, or generate deceptive or unethical material. Employees also may not rely solely on AI for decision\u2011making in critical processes.<br \/>Data Protection Requirements<br \/>Employees must classify data before entering it into AI tools and may use only data approved for public release. All sensitive information must remain within company\u2011controlled systems. When interacting with AI tools, users must follow encryption standards, authentication requirements, and least\u2011privilege access principles. AI-generated content must be reviewed for accuracy, security concerns, and compliance before being shared or implemented.<br \/>Enforcement and Consequences<br \/>Violations of this policy may result in disciplinary action, which can include retraining, loss of technology access, written warnings, suspension, or termination, depending on severity. Intentional misuse, data leakage, or willful policy violations may result in legal consequences and reporting to regulatory authorities. The organization reserves the right to audit AI usage to ensure compliance.<br \/>","protected":false},"excerpt":{"rendered":"<p>Part A \u2013 Short Answer Questions1) A cybersecurity policy is a formal organizational document that outlines rules, procedures, and expectations for protecting information systems, data, and technology resources from security threats. 
2) Cybersecurity policies are important because they establish consistent security standards, reduce organizational risk, protect sensitive information, ensure compliance with laws and regulations, and&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/adrianfarmer\/2026\/03\/26\/cyber-policies\/\">Read More<\/a><\/div>\n","protected":false},"author":31567,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","wds_primary_category":0},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/posts\/286"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/users\/31567"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/comments?post=286"}],"version-history":[{"count":1,"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/posts\/286\/revisions"}],"predecessor-version":[{"id":288,"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/posts\/286\/revisions\/288"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/media?parent=286"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/categories?post=286"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/adrianfarmer\/wp-json\/wp\/v2\/tags?post=286"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}