Part A – Short Answer Questions
1) A cybersecurity policy is a formal organizational document that outlines rules, procedures, and expectations for protecting information systems, data, and technology resources from security threats.
2) Cybersecurity policies are important because they establish consistent security standards, reduce organizational risk, protect sensitive information, ensure compliance with laws and regulations, and guide employee behavior to prevent security incidents.
3) Three cybersecurity risks of using AI tools in the workplace are:
• Data Leakage: Employees may unintentionally enter confidential or proprietary information into AI tools, exposing it outside the organization.
• Inaccurate or Manipulated Output: AI tools can produce incorrect or biased results, which may compromise decision‑making or introduce security vulnerabilities.
• Intellectual Property Exposure: AI-generated code or content may reuse copyrighted or proprietary material, creating legal and ownership risks.
4) Improper AI tool usage can violate the CIA Triad in the following ways:
• Confidentiality: Sensitive data entered into AI tools may be stored, logged, or accessed by unauthorized parties.
• Integrity: AI-generated information may be incorrect or altered, leading to corrupted data or inaccurate work.
• Availability: Overreliance on AI tools can disrupt workflows when a tool produces faulty output or becomes unreachable, leaving dependent tasks and systems without a fallback.
5) Acceptable use means using technology in ways that follow organizational policies, ethical guidelines, and job‑related purposes. Misuse occurs when technology is used in ways that violate policies, expose the organization to risk, or pursue personal, unethical, or unauthorized activities.
Part B – AI Usage Cybersecurity Policy
AI Usage Cybersecurity Policy
Purpose
The purpose of this policy is to establish guidelines for the safe, responsible, and secure use of artificial intelligence (AI) tools within the organization. This policy exists to protect confidential data, prevent unauthorized disclosure of company information, support legal and regulatory compliance, and ensure that AI tools are used ethically and in alignment with cybersecurity best practices.
Scope
This policy applies to all employees, contractors, interns, third‑party partners, and anyone who uses company systems or accesses company data while using AI tools. It covers all AI systems, including chatbots, coding assistants, generative AI models, and productivity AI tools used for work-related tasks.
Acceptable Use
Employees may use approved AI tools for general research, brainstorming, drafting non‑sensitive content, improving productivity, generating code templates, and assisting with problem‑solving tasks. Users may input only publicly available, non‑confidential information. Employees must verify the accuracy of AI-generated content and must follow workplace policies, ethical standards, and supervisor instructions when using AI tools.
Prohibited Use
Employees are not permitted to input confidential, sensitive, personal, proprietary, or regulated data into AI tools. Users may not upload internal documents, client information, financial data, source code, trade secrets, or any information classified as restricted. AI tools must not be used to violate laws, create harmful content, plagiarize, bypass security controls, or generate deceptive or unethical material. Employees also may not rely solely on AI for decision‑making in critical processes.
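One way to operationalize this rule is to screen prompts before they reach an AI tool. The sketch below is a minimal, hypothetical pre-submission filter; the pattern names and regexes are illustrative examples only, and a real deployment would rely on a dedicated data-loss-prevention (DLP) product rather than hand-rolled patterns.

```python
import re

# Hypothetical patterns for data this policy prohibits sending to AI tools.
# A production system would use a vetted DLP ruleset; this is only a sketch.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),    # secret-key-style token
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any prohibited patterns found in the prompt."""
    return [name for name, pat in PROHIBITED_PATTERNS.items() if pat.search(prompt)]

# Usage: block the request if any pattern matches.
hits = check_prompt("Customer SSN is 123-45-6789")
if hits:
    print(f"Blocked: prompt contains prohibited data ({', '.join(hits)})")
```

A filter like this catches only structured identifiers; it cannot recognize trade secrets or internal documents, which is why the policy still requires human judgment before any data is entered.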
Data Protection Requirements
Employees must classify data before entering it into AI tools and may only use data approved for public release. All sensitive information must remain within company‑controlled systems. When interacting with AI tools, users must follow encryption standards, authentication requirements, and least‑privilege access principles. AI-generated content must be reviewed for accuracy, security concerns, and compliance before being shared or implemented.
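The classify-before-use requirement can be expressed as a simple gate keyed on a data-classification label. The sketch below assumes a hypothetical four-level classification scheme (the level names are illustrative, not prescribed by this policy); only data labeled for public release passes the gate.

```python
from enum import Enum

class Classification(Enum):
    """Hypothetical data-classification levels, lowest to highest sensitivity."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Per the policy, only data approved for public release may enter an AI tool.
ALLOWED_FOR_AI = {Classification.PUBLIC}

def may_send_to_ai(label: Classification) -> bool:
    """Gate an AI request on the data's classification label."""
    return label in ALLOWED_FOR_AI
```

In practice, the label would come from the organization's classification process before the check runs, so that `may_send_to_ai(Classification.CONFIDENTIAL)` is refused while `may_send_to_ai(Classification.PUBLIC)` is permitted.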
Enforcement and Consequences
Violations of this policy may result in disciplinary action, which can include retraining, loss of technology access, written warnings, suspension, or termination depending on severity. Intentional misuse, data leakage, or willful policy violations may result in legal consequences and reporting to regulatory authorities. The organization reserves the right to audit AI usage to ensure compliance.