We built a Protocol Surveillance tool to enhance network security. Discover how we reduced our partner’s Mean Time to Detect and Resolve breaches.
Our partner is a mid-size cybersecurity firm headquartered in the US. It specialises in network traffic analysis, and its products help clients keep their networks secure by detecting and preventing malicious activity and security threats. The firm taps into the power of Machine Learning and Threat Intelligence to analyse millions of network exchanges and is amongst the top 10 providers of robust security solutions.
Our partner wanted to enhance its network monitoring solutions with a protocol surveillance tool. System administrators needed a solution so that they no longer had to manually extract and analyse the activity log files that network protocols generate on the server.
Monitoring the running activity of network protocols
In today’s world, where business happens digitally, an unreliable network can bring a company to a standstill. Network monitoring solutions are essential for maintaining a healthy data centre, giving companies better management of and control over their networks.
These solutions give companies the flexibility to track and analyse their networks, troubleshooting issues and detecting suspicious activity in real time.
The solution came in the form of Prolance, a protocol surveillance tool that NashTech built in the Rust programming language. Let’s look at what this project enabled:
Prolance was built to monitor the activities of the following network protocols:
Once we started processing these logs, we had two scenarios: either the user requires the raw logs of the protocol, or the logs can be filtered to match the monitoring requirements. Take, for instance, the scenario where the user is interested in only the IP addresses and the device names from thousands of DHCP logs. Filtering is required so that only the relevant pieces of information (such as the Discover, Offer, Request and Acknowledge phases) are extracted as per the user’s requirements.
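To illustrate the filtering step, here is a minimal Rust sketch that keeps only the four DORA phases of DHCP and pulls out the IP address and device name. The log-line layout and names used here are assumptions for illustration, not Prolance’s actual format or code:

```rust
/// One filtered record: the DHCP phase, the IP address and the device name.
/// (Hypothetical structure for this sketch.)
#[derive(Debug, PartialEq)]
struct DhcpRecord {
    phase: String,
    ip: String,
    device: String,
}

/// Keep only the Discover/Offer/Request/Acknowledge phases and extract
/// the fields the user asked for. Assumed line layout: "<phase> <ip> <device>".
fn filter_dhcp_logs(raw: &str) -> Vec<DhcpRecord> {
    raw.lines()
        .filter_map(|line| {
            let mut parts = line.split_whitespace();
            let phase = parts.next()?;
            let ip = parts.next()?;
            let device = parts.next()?;
            // Only the four DORA phases are relevant to this monitoring request.
            if matches!(phase, "DISCOVER" | "OFFER" | "REQUEST" | "ACK") {
                Some(DhcpRecord {
                    phase: phase.to_string(),
                    ip: ip.to_string(),
                    device: device.to_string(),
                })
            } else {
                None
            }
        })
        .collect()
}

fn main() {
    // A NAK line is dropped; the DORA lines are kept with only the
    // relevant fields extracted.
    let raw = "DISCOVER 0.0.0.0 laptop-01\nNAK 10.0.0.9 printer-02\nACK 10.0.0.5 laptop-01";
    for r in filter_dhcp_logs(raw) {
        println!("{} {} {}", r.phase, r.ip, r.device);
    }
}
```

The same shape generalises to other protocols: each gets its own record type and a filter that keeps only the phases the user cares about.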
Whether the logs are filtered or used raw, they are compressed before being produced onto the Kafka topic. Apache Kafka is used here for message queuing, which lets processing scale: Kafka divides the work across multiple consumer instances. We used gzip compression and enabled the user to schedule this process as required.
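The scaling idea behind Kafka can be sketched in a few lines: messages are assigned to partitions by hashing their key, and each consumer instance in a group owns a disjoint subset of partitions. The sketch below uses Rust’s standard hasher purely for illustration (Kafka’s actual default partitioner uses murmur2, and a real producer would go through a client library such as rdkafka):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Pick a partition for a message key, Kafka-style: hash the key,
/// then take it modulo the partition count. DefaultHasher is a
/// stand-in for Kafka's murmur2 partitioner in this sketch.
fn partition_for(key: &str, num_partitions: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % num_partitions
}

fn main() {
    // Messages with the same key always land on the same partition,
    // so each consumer instance can process its partitions independently.
    let partitions = 4;
    for key in ["dhcp-host-a", "dhcp-host-b", "dhcp-host-a"] {
        println!("{key} -> partition {}", partition_for(key, partitions));
    }
}
```

Because same-keyed messages (for example, logs from one host) always map to one partition, per-host ordering is preserved while the overall log volume is spread across however many consumers the topic’s partition count allows.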