Name: Michelle Ayaim
Date: 08/04/2025
Employer name: Gerrold Walker
Organization: NIWC Atlantic
CYSE 368: Cybersecurity Internship
A ship’s capacity to withstand cyber-attacks is now just as important as its armor and weapons, making cyber readiness an essential part of today’s naval operations. As an intern at NIWC Atlantic, I worked at the center of that support and technical rigor, helping Navy vessels establish the cyber posture required to deploy and to remain capable in high-stakes, unpredictable situations. Thanks to that experience, I have a firsthand understanding of what cyber readiness takes in practice and how difficult it is to achieve, particularly on a ship.
As a cybersecurity intern, I was able to contribute to the readiness of multiple vessels by completing scan procedures, closing vulnerabilities, checking patch integrity, and developing tools that made the process easier for future operators. The job was direct and fast-paced. I used Security Center, Nessus, VRAM, Metbench, and Ansible to detect and remediate threats in systems running Red Hat Linux, Windows Server Update Services, and Cisco FMC platforms. From plugin updates to SIEM and ACS/RA VPH issues, each activity directly impacted the ship’s capacity to operate safely and securely in combat.
Cyber readiness is an evolving, continuous process rather than a box that is checked. Clean scan results, patched systems, recorded baselines, and staff who know how to maintain that posture are all necessary for a deployable ship. We fixed real vulnerabilities, not just hypothetical ones; each one created the risk of compromising intelligence, interfering with communications, or exposing mission systems to unauthorized access. Whether I was working with SMEs to resolve recurring scan failures or building scripts to automate patching, the objective was always the same: to help ships obtain the authorization to execute their missions without being held back by cyber complications.
The internship also highlighted how complicated the shipboard environment is. Unlike robust network configurations on land, ships have limits that force cybersecurity efforts to be inventive. Connectivity might be poor, bandwidth limited, and the technology outdated. Teams change often, and not everyone is well versed in the current systems or processes. Scanning and remediation had to be done quickly, often while the ship had other maintenance scheduled that required systems to be shut down. These limits influenced how I worked, not just in addressing technical problems, but also in communicating clearly.
One project I am especially fond of took place on a Landing Helicopter Dock. When I joined, the ship was struggling with hundreds of vulnerabilities and delayed scan cycles. The remediation process included plugin upgrades, Ansible scripting, ACS patches, and post-scan validation. The true challenge, however, was maintaining continuity in the face of a busy staff and a live operating tempo. To document and explain each stage, I used color-coded severity maps. What could have been a complicated technical procedure became something sailors could genuinely understand. The ship met scan requirements, and the visuals were preserved for future teams, eliminating the need for everyone to start over the following run.
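The severity maps themselves were simple in concept: group findings by system, then give each severity level a color a sailor can read at a glance. As a rough sketch of that idea (not the actual tool or data we used), a short Python script could summarize an exported findings file this way; the CSV layout and column names here are assumptions for illustration only.

    # Sketch: build a color-coded severity summary from an exported findings CSV.
    # The CSV layout (columns "host", "plugin_name", "severity") is an assumption
    # for illustration, not an actual export format.
    import csv
    from collections import Counter, defaultdict

    SEVERITY_COLORS = {
        "Critical": "red",
        "High": "orange",
        "Medium": "yellow",
        "Low": "green",
    }

    def summarize(findings_csv: str) -> dict:
        """Count findings per host per severity so they can be drawn as a heat map."""
        counts = defaultdict(Counter)
        with open(findings_csv, newline="") as fh:
            for row in csv.DictReader(fh):
                counts[row["host"]][row["severity"]] += 1
        return counts

    if __name__ == "__main__":
        for host, sev_counts in summarize("findings.csv").items():
            cells = ", ".join(
                f"{sev} ({SEVERITY_COLORS.get(sev, 'gray')}): {n}"
                for sev, n in sev_counts.most_common()
            )
            print(f"{host}: {cells}")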
Throughout my internship, I came to appreciate that creative clarity is a force multiplier in cybersecurity. I was able to help people by using narrative and simple visuals, whether they were seasoned experts in the field or new team members learning difficult operations. I did not only address issues; I also created resources. Memes became memory aids. Dashboards became operational guides. And in each case, the answer lasted beyond the moment. It was teachable. It was repeatable.
Keeping track of each remediation process added another layer of impact. By documenting everything I had done, step-by-step scanning methods, the plugin sets deployed, and the troubleshooting steps taken, I was providing a road map for future teams. They would not have to guess what “ready” looked like, and they could modify what I had built to match changing systems. These artifacts were more than merely useful; they were necessary. They enabled cyber readiness to grow into a sustainable condition, rather than merely a benefit of one internship.
Personally, this work took on new meaning for me. The systems I interacted with were important, but so were the individuals who created them. And while my name was tied to dashboards and remediation logs, what truly stuck with me was witnessing my efforts boost others’ confidence. That calm pride, the type that comes from knowing we clarified difficult issues and made a ship safer, is something I will keep with me. I learned how to combine discipline with creativity, contribute without a title, and share technical clarity in engaging ways.
My internship did more than prepare me to become a cybersecurity analyst; it also showed me the type of analyst I aspire to be. Someone who instills calm in an obviously chaotic situation. Someone capable of both executing and educating. Someone who can take a scan’s dry facts and transform them into an effective approach, complete with visualizations that cause others to nod and say, “got it.” I did not only help deliver ships; I also helped transfer knowledge. That is what cyber readiness requires: people who care enough to make it feasible, sustainable, and effective.
One of the most defining elements of my internship was my ability to adjust not only to systems but also to the sudden rate of change. Shipboard cybersecurity is a non-linear process. Schedules often changed. Deployment priorities were revised. Locations I thought I would never visit became the new focal points of remediation operations. It was not enough to know the skills; I had to pivot quickly, adapt to different settings, and do my best work whether I was on a destroyer or working from a distant location supporting a carrier.
During my time at NIWC Atlantic, I worked on many ships at diverse levels of readiness, each with its own set of setbacks. Sometimes I arrived to find systems completely different from what I had previously worked on. The scans in one area had not been updated in months due to an equipment backlog. In another case, connection difficulties prevented SIEM alerts from correctly reporting severe vulnerabilities. The existing playbook was outdated and only worked some of the time, so I worked alongside senior cybersecurity analysts and specialists to develop an adjustable one: the ability to take key remediation ideas and adapt them on the fly to meet the restrictions of each circumstance.
The ability to adapt extended to team dynamics as well. Crew members changed often, and their cybersecurity experience differed greatly. I had to quickly learn how to communicate with a variety of audiences, including those with technical backgrounds and those who were completely unfamiliar with the tools. I began adjusting my visuals not only for the material, but also for each ship’s crew. Illustrations that were engaging on one ship fell flat on another, so I rebuilt them. Each change added a layer of knowledge to my approach.
Physically, each location added complexity. Working on Navy ships required functioning in cramped quarters with restricted access to equipment and unreliable power supplies. I occasionally had limited time to execute scans before networks went down or systems were transferred to other assignments. It taught me to be precise, not just in analyzing but also in time management and prioritization. It also built resilience. When something did not proceed as planned because of physical limits, I did not pause; I just adjusted.
At one point, I helped two ships simultaneously. Each had unique plugin needs, cleanup timeframes, and scan failure patterns. I created customized tracking systems and maintained version control for each fix that was released. It took some time to develop the ability to switch contexts without sacrificing integrity, but it quickly became one of my most useful skills.
This type of flexible execution may not always be visible in final scan results or dashboards. However, it is reflected in the success of every system fixed in time for deployment, as well as the continuity provided by my documentation. Flexibility in cybersecurity is not a choice; it is a necessity. Threats develop, team members change, and missions get more complex. At the heart of all of this, the ability to stay calm, solutions-focused, and creatively productive is what moves systems from vulnerability to validation.
Looking back, I know that the adaptability I showed was more than a personal capability. It was a blueprint for how cybersecurity operations should work. We cannot only train teams how to use tools; we also need to teach them how to adjust when those tools act unexpectedly or when their surroundings fail to support them. The actual work occurs when we are not sure whether the network connection will hold, whether the plugins will be compatible, or whether the subject matter expert will be readily available.
More than once, the cyber state of a ship depended not just on the systems, but also on someone noticing a discrepancy between procedure and reality. On one vessel, we came across an instance where remediation was delayed by competing scan schedules. VRAM had documented significant vulnerabilities that did not match Nessus results, and command was not sure which report to act on. Rather than pushing the team to choose one tool over another, we created a cohesive comparison table to clarify disparities and flag false positives. It became a moment of clarity in decision-making, allowing leadership to approve the appropriate action plan before anything else.
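The comparison itself does not need anything elaborate. The sketch below is an illustration of the idea rather than the table we actually built: it assumes each tool’s findings have been exported to CSV with host and plugin-ID columns (an assumed layout) and lists the findings that appear in only one source so the discrepancies are explicit.

    # Sketch: reconcile two vulnerability exports so discrepancies are explicit.
    # Column names ("host", "plugin_id") and file names are assumed for illustration.
    import csv

    def load_findings(path: str) -> set:
        with open(path, newline="") as fh:
            return {(row["host"], row["plugin_id"]) for row in csv.DictReader(fh)}

    vram = load_findings("vram_export.csv")
    nessus = load_findings("nessus_export.csv")

    print("In VRAM only (possible stale or false-positive entries):")
    for host, plugin in sorted(vram - nessus):
        print(f"  {host}  plugin {plugin}")

    print("In Nessus only (possible unreported findings):")
    for host, plugin in sorted(nessus - vram):
        print(f"  {host}  plugin {plugin}")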
That minor victory led to a larger lesson. Cyber tools are most effective when their output is interpreted. Numbers alone do not determine mission decisions; clarity does. Interpretation requires patience, ingenuity, and, on occasion, improvisation. During one shift aboard a ship, power outages were disrupting our patching cycle. We could not guarantee a complete solution, but we could guarantee a strategy. We divided remediation tasks into scheduled pieces that corresponded to power uptime windows and built a checklist that the night-shift sailors could complete without me or my team. By dawn, the systems had been patched. We were not present for the last stages, but the clear handoff made it possible.
Moments like those reminded me that cybersecurity is more than simply defense. It is about operational trust. Everything we did was about more than tools; it was about consistency in a field that is continuously changing because of technical limits, scheduling changes, and personnel turnover.
The rate of change was sometimes more challenging than the pace of the ship. Shipboard cybersecurity has dozens of moving pieces, and staying ahead of issues involves fast thinking across different areas. I once worked remotely to support a vessel, relying on incomplete documentation, time-stamped logs, and intermittent access. We discovered a trend of unsuccessful ACS connections that did not raise alarms. Rather than waiting for more bandwidth to get complete diagnostics, we created a simple script to check ACS responsiveness using only the system’s local resources. The findings revealed a permissions misconfiguration buried under layers of presumed settings. It was not elegant, but it was sufficient to resolve the problem before it worsened.
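The real script was written around that ship’s constraints, but the core idea can be sketched with nothing more than the Python standard library: try a TCP connection to the ACS endpoint on a short timeout and log whether it answers. The host name and port below are placeholders, not the real values.

    # Sketch: lightweight responsiveness check using only local resources.
    # Host and port are placeholders; the real endpoint details are not shown here.
    import socket
    import time

    def check_endpoint(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        target = ("acs.local.example", 443)  # placeholder endpoint
        for _ in range(5):
            status = "responsive" if check_endpoint(*target) else "unreachable"
            print(f"{time.strftime('%H:%M:%S')} {target[0]}:{target[1]} {status}")
            time.sleep(10)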
Those experiences showed me the value of making progress rather than pushing for perfection. Waiting for the ideal toolset or conditions is simply unrealistic when ships are constantly moving, systems are deteriorating, and missions change on a regular basis. During one maintenance phase, a ship was running on badly outdated infrastructure, with plugin updates delivered inconsistently and compatibility problems causing failure cycles. The repair effort required a leap of interpretation: determining which scans failed owing to true problems and which were broken because of the tooling’s deterioration. I worked with senior specialists to create a temporary sandbox environment on the ship’s unused server space, allowing us to test plugin batches without compromising real assets. It was not normal operating practice, but it gave us the flexibility we needed to close vulnerabilities on time.
With each mission, I improved not just my technical skills, but also my emotional awareness. People were not always working at full capacity, particularly during high-tempo missions, so I very quickly learned how to read the room. If the team felt overloaded, I simplified my briefings and let graphics do the demanding work. If they were alert and engaged, I explained techniques, presented innovative tactics, and solicited input. This gave each session a pace that matched the setting, making even sophisticated fixes seem achievable.
Working as a cybersecurity analyst on a Navy ship is like being the ship’s digital doctor. A cybersecurity analyst assesses system health, finds abnormalities, and runs repairs in much the same way a physician does. During my internship with NIWC Atlantic, I experienced this similarity firsthand as I went from ship to ship performing cybersecurity assessments and implementing remediation processes. Every vessel exhibited unique symptoms, stresses, and risk thresholds.
I was not handing out medicine on these ships; instead, we delivered patches, upgraded plugins, confirmed scan integrity, and addressed alarm failures using SIEM and ACS systems. But the fundamentals remained the same: examine, diagnose, treat, and document. Just as doctors prioritize patients based on severity, we addressed vulnerabilities based on risk level. My “code blues” were critical vulnerabilities such as remote code execution dangers and improper access restrictions. They demanded rapid attention, frequently under tight mission deadlines.
And just as a mistake in medicine may lead to worse consequences, a missed or incorrect patch in cybersecurity can make systems more susceptible or disrupt mission-critical tasks. That is why I had to learn how to read scan data and grasp environmental context. For example, on a destroyer, I noticed that an obsolete plugin set was masking several serious vulnerabilities. The first scan seemed clean, like a patient with no symptoms but an underlying illness. We identified the issue, performed a differential scan using updated plugins, and discovered vulnerabilities that had gone unseen for weeks. It was a perfect example of “don’t treat the symptom, treat the cause.”
Security Center is the brain behind a ship’s vulnerability management program. It collects scan data from many plugins and tools and displays it in dashboards that analysts use to assess risk posture. Think of it like a hospital ICU’s telemetry system: it does not treat the patient, but it tracks every vital sign and alerts us to what has to be addressed.
Throughout my internship, Security Center was the primary platform for organizing vulnerability checks across onboard systems. We utilized it to schedule scan tasks, establish rules depending on mission context, and check the status of remediations after each scan. Its ability to clearly convey raw data through charts, graphs, and severity heat maps allowed us to quickly determine which areas required urgent attention.
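For readers unfamiliar with Security Center (Tenable’s SC), the kind of query we leaned on can be approximated with the pyTenable library. This is only a sketch under the assumption that API access is available; the host, credentials, and filter values are placeholders, and the exact fields returned can vary by version.

    # Sketch: pull high/critical findings from Security Center with pyTenable.
    # Host and credentials are placeholders; filters and field names are illustrative.
    from tenable.sc import TenableSC

    sc = TenableSC("securitycenter.example.mil")  # placeholder host
    sc.login("api_user", "api_password")          # placeholder credentials

    # Cumulative vulnerability view, restricted to critical (4) and high (3) severity.
    for vuln in sc.analysis.vulns(("severity", "=", "4,3"), tool="vulndetails"):
        print(vuln["ip"], vuln["pluginID"], vuln["pluginName"], vuln["severity"]["name"])

    sc.logout()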
Scan aggregation is an important benefit. SC collects scan findings from remote Nessus scanners and organizes them into a uniform view. This prevents duplicated effort, shows patterns over time, and enables analysts to link new vulnerabilities with previous fixes. In a shipboard setting where time and bandwidth are limited, that consolidation is essential.
Furthermore, SC’s ability to track plugin age helped us keep tabs on plugin currency, which is important in Navy systems where outdated scans may produce false negatives. We frequently had to assess plugin freshness to ensure ships were not flying blind due to outdated scan content.
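That freshness check is mostly date math once you know when the plugin set was loaded. A minimal sketch, assuming the plugin set timestamp is reported as a YYYYMMDDHHMM string (an assumption about the feed format) and using an illustrative two-week threshold, might look like this.

    # Sketch: flag a stale plugin set before trusting scan results.
    # The timestamp format (YYYYMMDDHHMM) is an assumption about the feed string.
    from datetime import datetime

    MAX_AGE_DAYS = 14  # illustrative threshold, not an official requirement

    def plugin_set_age_days(plugin_set: str) -> int:
        loaded = datetime.strptime(plugin_set, "%Y%m%d%H%M")
        return (datetime.now() - loaded).days

    if __name__ == "__main__":
        feed = "202504010830"  # placeholder plugin set string
        age = plugin_set_age_days(feed)
        if age > MAX_AGE_DAYS:
            print(f"WARNING: plugin set is {age} days old; results may be unreliable.")
        else:
            print(f"Plugin set is {age} days old; within the {MAX_AGE_DAYS}-day window.")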
If Security Center acts as the telemetry dashboard, Nessus is the diagnostic device. It actively scans systems for vulnerabilities, much as a doctor performs an MRI, but for network health. Nessus is quick, adaptable, and extremely thorough, detecting anything from open ports to misconfigured software and missing updates. We utilized Nessus in conjunction with SC to run targeted scans on Red Hat and Windows computers aboard several Navy ships. Its lightweight design makes it ideal for restricted environments. With bandwidth at a premium and time windows short, particularly while systems were under repair, Nessus allowed us to log in, scan, and exit.
A key feature is plugin customization. I could tailor scan tasks to the kind of ship, its mission readiness objectives, and the status of its systems. For example, on the LHD, scans were focused on ACS vulnerabilities and kernel version checks because that was where previous failures had occurred. Nessus allowed us to be precise rather than redundant.
Another benefit is credentialed scanning. By authenticating with privileged access, one may go beyond obvious concerns to find deeper configuration errors. That is how we discovered the “silent vulnerabilities”: settings or permissions that would not generate alarms unless accessed through trusted means. And keep in mind that Nessus scan data powered Security Center’s dashboards, which we subsequently used to inform SMEs and sailors. Nessus is the data engine that powers discoveries and actions.
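Because that scan data drives everything downstream, being able to read it directly is useful. The sketch below parses a standard .nessus XML export and lists the high and critical findings per host; the file name is a placeholder, and severity values follow the usual 0-4 scale in the export.

    # Sketch: read a .nessus export and list high/critical findings per host.
    # File name is a placeholder; severity 3 = high, 4 = critical in the export.
    import xml.etree.ElementTree as ET

    def high_findings(nessus_file: str, min_severity: int = 3):
        tree = ET.parse(nessus_file)
        for host in tree.getroot().iter("ReportHost"):
            for item in host.iter("ReportItem"):
                if int(item.get("severity", "0")) >= min_severity:
                    yield host.get("name"), item.get("pluginID"), item.get("pluginName")

    if __name__ == "__main__":
        for host, plugin_id, name in high_findings("ship_scan.nessus"):
            print(f"{host}  [{plugin_id}]  {name}")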
Next is VRAM. VRAM is the Navy’s vulnerability tracking brain, storing remediation timetables, asset histories, and action logs from several systems. It is like a patient’s medical history, indicating when an issue was initially detected, when it was treated, and whether it returned. During my internship, VRAM was important for understanding the longevity of each vulnerability. We did more than just scan and patch; we made certain that the activity was recorded, tracked, and repeatable. VRAM monitored whether a fix “stuck,” whether the system regressed, and whether other ships had similar problems.
One significant advantage is that remediation integrity is maintained. VRAM allowed us to verify that updates were successfully distributed and that vulnerabilities were closed in official records. Our modifications helped the Navy’s readiness scores overall.
Another advantage is improved asset visibility. We could compare ships, identify which ones had repeated plugin failures, and adjust our methods accordingly. This visibility allowed us to prioritize tasks efficiently during multi-vessel support operations. We also utilized VRAM to create reports and justify actions, such as validating ACS remediation aboard the destroyer or clearing SIEM warnings on the LHD with plugin upgrades. VRAM was more than simply a tracker; it was also a paper trail and a credibility enhancer.
After VRAM comes Metbench, the lab work of the process. It confirms that scans were completed successfully, that systems were properly patched, and that vulnerabilities had actually been fixed, the way a lab test certifies a patient’s wellness following therapy.
We relied on Metbench to assess scan integrity after remediation. We did not just presume success after delivering patches with WSUS or scripting with Ansible; we verified it. Metbench gave us the confidence to state, “This ship is ready.” It allowed us to run scans independently of SC or Nessus and identify edge-case issues such as partial patch failures or scan skips caused by configuration fluctuations.
Its interface let us construct scan benchmarks, so when a ship passed, it was not simply “OK,” but “verified against spec.” This level of assurance was important to command decisions; deployment approval depended on such proof. We also utilized Metbench when systems acted unexpectedly. For example, one Red Hat server repeatedly flagged problems that did not match its patch level. Using Metbench, we established that it was a plugin-related issue rather than a genuine risk, saving the sailors hours of needless debugging.
Moving on to Ansible. Ansible is a powerful automation tool, like a surgical machine built to perform precise security operations without missing a beat. It allowed us to push settings, apply fixes, and remediate vulnerabilities remotely and consistently. In shipboard cybersecurity, where time is limited and systems can be temperamental, Ansible’s agentless design becomes an invaluable tool.
We used Ansible to automate repairs for ACS/RA failures and configuration errors. These were not one-time scripts; they were repeatable playbooks that ensured uniformity across several vessels. When a ship presented a broken authentication chain, Ansible enabled us to immediately deploy a revised configuration without requiring human access to each endpoint. It saved hours, even days, of hands-on troubleshooting. One key advantage of Ansible is its scalability. We could use the same playbook to manage a single Red Hat system or coordinate patches across a cruiser and a destroyer, with consistent outcomes. That is significant in naval contexts, where personnel rotation, bandwidth restrictions, and system fluctuations frequently disrupt continuity. With Ansible, we rebuilt predictability.
Another important advantage is control over versioning and auditability. Every job Ansible runs is recorded, making it simple to confirm deployment success. This means that fixes were not only applied but also tracked. When I assisted my team in ensuring that plugin updates were delivered prior to a scan cycle, Ansible’s logs served as proof. In cybersecurity, proof is key.
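As a hedged illustration of how that audit trail could be kept, a small wrapper can run a playbook and save its output with a timestamp. The playbook name and log directory are placeholders; this is not the actual playbook or logging setup we used.

    # Sketch: run a playbook and keep its output as evidence of the deployment.
    # Playbook and log paths are placeholders for illustration.
    import subprocess
    from datetime import datetime
    from pathlib import Path

    def run_playbook(playbook: str, log_dir: str = "remediation_logs") -> bool:
        Path(log_dir).mkdir(exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        result = subprocess.run(
            ["ansible-playbook", playbook],
            capture_output=True,
            text=True,
        )
        log_file = Path(log_dir) / f"{Path(playbook).stem}_{stamp}.log"
        log_file.write_text(result.stdout + result.stderr)
        return result.returncode == 0

    if __name__ == "__main__":
        ok = run_playbook("acs_remediation.yml")  # placeholder playbook name
        print("Playbook succeeded" if ok else "Playbook failed; see log for details")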
Then there’s Red Hat Linux, which was new to me after having used Kali and Ubuntu previously. Many mission-critical systems on Navy networks run Red Hat Linux. It powers server environments, network appliances, and secure communication systems. Working with it takes a thorough grasp of not only Linux foundations, but also the subtleties of enterprise-level setup, security, and patching.
During my internship, Red Hat systems showed up regularly in scan cycles. We noticed a wide range of issues, including kernel mismatches, obsolete SSL protocols, and improper permission configurations. Red Hat’s modular design streamlined our work, allowing us to isolate vulnerable packages, compare scan findings to repository changes, and tailor remediation approaches.
Red Hat also provides comprehensive SELinux controls to protect systems against privilege escalation and unwanted access. We did not only repair these systems; we made sure that permission boundaries stayed intact. When vulnerability reports revealed improperly configured access restrictions, we utilized the SELinux mappings to strengthen security at the policy level. More importantly, Red Hat is highly compatible: I saw it tied into Nessus, Security Center, Ansible, and SIEM dashboards, which made it a perfect platform for creating documentation. That was not a coincidence; it was the standard.
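On the Red Hat side, even basic status checks helped confirm a baseline before and after remediation. A minimal sketch, assuming the standard getenforce, uname, and rpm utilities are present on the host, might record the SELinux mode and kernel version like this.

    # Sketch: record SELinux mode and kernel package version as a quick baseline.
    # Assumes standard Red Hat utilities (getenforce, uname, rpm) are available.
    import subprocess

    def run(cmd: list[str]) -> str:
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    if __name__ == "__main__":
        print("SELinux mode  :", run(["getenforce"]))        # expect "Enforcing"
        print("Running kernel:", run(["uname", "-r"]))
        print("Kernel package:", run(["rpm", "-q", "kernel"]))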
Furthermore, WSUS is an organized patch management system for Windows environments. Think of it as the circulatory system for Microsoft security updates across a network of devices. In naval environments, where assets frequently consist of both current and legacy Windows systems, WSUS becomes essential.
Our work with WSUS included deploying fixes for different Windows services on Navy ships. These were not simply cosmetic improvements; they were needed defenses against attacks aimed at SMB protocols, remote desktop vulnerabilities, and more. WSUS allowed us to selectively push authorized updates, giving us control over what was delivered and when. One main benefit is selective targeting: we could restrict updates to selected devices, for example updating SIEM systems without affecting any other infrastructure. This was necessary during restricted scan windows or when the crew needed to maintain uptime. We did not just deploy in bulk; we deployed tactically.
WSUS also provided compliance tracking, allowing us to determine whether and when ships received especially important security updates. These dates were essential during remediation reporting, particularly when plugin version mismatches caused incorrect perceptions about security posture. The WSUS records helped to explain which updates were deployed, pending, or declined.
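Compliance data like that is easiest to reason about once it is exported. Purely as an illustration, and assuming a report exported to CSV with computer, update, and status columns (not the actual WSUS schema), the machines with outstanding updates can be pulled out in a few lines.

    # Sketch: flag machines with failed or pending updates from an exported report.
    # The CSV columns ("computer", "update", "status") are assumed for illustration.
    import csv
    from collections import defaultdict

    def outstanding_updates(report_csv: str) -> dict:
        pending = defaultdict(list)
        with open(report_csv, newline="") as fh:
            for row in csv.DictReader(fh):
                if row["status"].lower() in {"failed", "needed", "pending"}:
                    pending[row["computer"]].append(row["update"])
        return pending

    if __name__ == "__main__":
        for computer, updates in outstanding_updates("wsus_report.csv").items():
            print(f"{computer}: {len(updates)} outstanding update(s)")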
Finally, there is Cisco FMC (Firepower Management Center). Cisco FMC is the dashboard for Cisco Firepower Threat Defense, a powerful firewall and intrusion prevention system. In shipboard cybersecurity, FMC acts as a network gatekeeper, evaluating traffic, enforcing rules, and stopping malicious behavior before it reaches endpoints. We worked with FMC to manage rule sets, evaluate warnings, and resolve connection difficulties, particularly when plugin versions and intrusion detection feeds were mismatched. Sometimes scan failures were caused by improper firewall rules rather than endpoint issues. FMC allowed us to view boundary behavior and triage where the disconnect was occurring.
One significant advantage of FMC is the ability to tune policies depending on real-world situations. For example, we adjusted access control policies when they interfered with planned Nessus scans, ensuring that our security tools were not mistakenly blocked by security rules. When the environment changes, FMC gives you the tools to change with it.
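FMC also exposes a REST API that makes those policy reviews scriptable. The sketch below authenticates and lists access control policies; the host, credentials, and the decision to skip TLS verification are placeholders and assumptions for the sketch, not guidance for a production network.

    # Sketch: authenticate to the FMC REST API and list access control policies.
    # Host and credentials are placeholders; verify=False is only for the sketch.
    import requests

    FMC_HOST = "https://fmc.example.mil"          # placeholder
    AUTH_URL = f"{FMC_HOST}/api/fmc_platform/v1/auth/generatetoken"

    resp = requests.post(AUTH_URL, auth=("api_user", "api_password"), verify=False)
    token = resp.headers["X-auth-access-token"]
    domain = resp.headers["DOMAIN_UUID"]

    policies = requests.get(
        f"{FMC_HOST}/api/fmc_config/v1/domain/{domain}/policy/accesspolicies",
        headers={"X-auth-access-token": token},
        verify=False,
    )
    for policy in policies.json().get("items", []):
        print(policy["id"], policy["name"])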
During my internship, I developed three important competencies: technical skills across several platforms, adaptive problem-solving under constrained conditions, and effective communication with diverse audiences. My transition from intern to confident cyber analyst shows how cybersecurity competence grows through direct involvement with real-world systems confronting real threats.
Finally, my experience taught me that successful naval cybersecurity requires a comprehensive strategy that includes technological capability, operational awareness, adaptable thinking, and clear communication. My focus on making cybersecurity “feasible, sustainable, and effective” now serves as a road map for my continuing growth as a cybersecurity analyst in this important field.