Tuesday, 29 September 2015

Five steps to optimize your firewall configuration

95% of all firewall breaches are caused by misconfiguration. Here's how to address the core problems

Firewalls are an essential part of network security, yet Gartner says 95% of all firewall breaches are caused by misconfiguration. In my work I come across many firewall configuration mistakes, most of which are easily avoidable. Here are five simple steps that can help you optimize your settings:

* Set specific policy configurations with minimum privilege. Firewalls are often installed with broad filtering policies that allow traffic from any source to any destination. This happens because the Network Operations team doesn’t know exactly what is needed, so it starts with this broad rule and intends to work backwards. However, the reality is that, due to time pressures or simply not regarding it as a priority, the team never gets round to tightening the firewall policies, leaving the network in this perpetually exposed state.

You should follow the principle of least privilege – that is, give the minimum level of privilege the user or service needs to function normally, thereby limiting the potential damage caused by a breach. You should also document properly – ideally mapping out the flows that your applications actually require before granting access. It’s also a good idea to regularly revisit your firewall policies to review application usage trends, identify new applications being used on the network, and determine what connectivity they actually require.
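To make the first step concrete, here is a minimal sketch of what an automated check for overly broad rules might look like. The rule format and field names below are illustrative assumptions, not any particular firewall vendor's API:

```python
# Minimal sketch: flag overly broad "any-to-any" allow rules in a rule base.
# The rule representation (dicts with src/dst/action) is illustrative.

ANY = "0.0.0.0/0"

def find_broad_rules(rules):
    """Return allow rules that permit traffic from any source to any destination."""
    return [
        r for r in rules
        if r["action"] == "allow" and r["src"] == ANY and r["dst"] == ANY
    ]

rules = [
    {"name": "temp-allow-all", "action": "allow", "src": ANY, "dst": ANY, "port": "any"},
    {"name": "web-to-db", "action": "allow", "src": "10.0.1.0/24", "dst": "10.0.2.10/32", "port": "5432"},
]

broad = find_broad_rules(rules)
```

Running a check like this on a schedule is one way to catch the "we'll tighten it later" rules before they become permanent.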

* Only run required services. All too often I find companies running firewall services that they either don’t need or no longer use: dynamic routing, for example, which as a best practice should not be enabled on security devices, or “rogue” DHCP servers distributing IP addresses on the network, which can lead to availability issues caused by IP conflicts. It’s also surprising how many devices are still managed using unencrypted protocols like Telnet, despite the protocol being over 30 years old.

The solution is to harden devices and ensure that configurations are compliant before devices are promoted into production environments. This is something many organizations struggle with. By configuring your devices based on the function you actually want them to fulfil, and applying the principle of least privilege before deployment, you will improve security and reduce the chances of accidentally leaving a risky service running on your firewall.
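A pre-deployment compliance check can be as simple as comparing a device's enabled services against a denylist. The service names below are illustrative assumptions:

```python
# Minimal sketch: check a device's enabled services against a hardening
# baseline before it is promoted into production. Service names are
# illustrative, not tied to a specific platform.

RISKY_SERVICES = {"telnet", "dhcp-server", "dynamic-routing", "http-admin"}

def hardening_violations(enabled_services):
    """Return risky services that should be disabled before deployment."""
    return sorted(set(enabled_services) & RISKY_SERVICES)

device = {"hostname": "fw-edge-01", "services": ["ssh", "telnet", "dhcp-server", "ntp"]}
violations = hardening_violations(device["services"])
```

Gating promotion to production on an empty violations list keeps Telnet and rogue DHCP from ever reaching the live network in the first place.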

* Standardize authentication mechanisms. During my work, I often find organizations that use routers that don’t follow the enterprise standard for authentication. One example I encountered is a large bank that had all the devices in its primary data centers controlled by a central authentication mechanism, but did not use the same mechanism at its remote offices. Because corporate authentication standards were not enforced, staff in a remote branch could access local accounts with weak passwords, and were subject to a different limit on login failures before account lockout.

This scenario reduces security and creates more opportunities for attackers, as it’s easier for them to access the corporate network via the remote office. Enterprises should therefore ensure that any remote offices they have follow the same central authentication mechanism as the rest of the company.
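One way to enforce this is to audit every site's authentication settings against a single corporate baseline. The field names and values below are illustrative assumptions:

```python
# Minimal sketch: report where a site's authentication settings drift
# from the corporate baseline. Field names are illustrative.

BASELINE = {"auth": "central-radius", "local_accounts": False, "lockout_after": 5}

def auth_drift(site_config):
    """Return the baseline settings a site deviates from, with its actual values."""
    return {k: site_config.get(k) for k in BASELINE if site_config.get(k) != BASELINE[k]}

datacenter = {"auth": "central-radius", "local_accounts": False, "lockout_after": 5}
branch = {"auth": "local", "local_accounts": True, "lockout_after": 10}
```

An empty drift report means the site matches the standard; anything else, like the branch office above, is exactly the kind of weak link attackers look for.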

* Use the right security controls for test data. Organizations tend to have good governance stating that test systems should not connect to production systems and collect production data, but this is often not enforced, because testers see production data as the most accurate way to test. However, when you allow test systems to pull data from production, you are likely bringing that data into an environment with a lower level of security. That data could be highly sensitive, and it could also be subject to regulatory compliance. So if you do use production data in a test environment, make sure you apply the security controls required by the classification that data falls into.
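A common middle ground is to pseudonymize sensitive fields before production records ever reach the test environment. Here is a minimal sketch; the column names are illustrative assumptions:

```python
# Minimal sketch: deterministically pseudonymize sensitive fields before
# copying production records into a test environment. Column names are
# illustrative.

import hashlib

SENSITIVE = {"email", "ssn"}

def mask(value):
    """Map a value to a stable token (same input always yields the same token)."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def sanitize(record):
    """Return a copy of the record with sensitive fields replaced by tokens."""
    return {k: (mask(v) if k in SENSITIVE else v) for k, v in record.items()}

prod_row = {"id": 42, "email": "jane@example.com", "ssn": "123-45-6789", "plan": "gold"}
test_row = sanitize(prod_row)
```

Because the masking is deterministic, joins and duplicate detection still work in testing, but the test environment never holds the real values.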

* Always log security outputs. While logging properly can be expensive, the costs of being breached, or of being unable to trace an attack, are far higher. Failing to store the log output from your security devices, or not doing so with enough granularity, is one of the worst things you can do for network security: not only will you not be alerted when you’re under attack, but you’ll have little or no traceability when carrying out a post-breach investigation. By ensuring that all outputs from security devices are logged correctly, organizations will save time and money further down the line and will enhance security by being able to properly monitor what is happening on their networks.
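An audit can also verify that every security device actually ships its logs somewhere central, and at sufficient granularity. The sketch below uses the standard syslog severity ordering; the config fields are illustrative assumptions:

```python
# Minimal sketch: flag devices whose logging is missing a collector or is
# configured too coarsely. Severity names follow syslog convention;
# the device config fields are illustrative.

LEVELS = ["emergency", "alert", "critical", "error",
          "warning", "notice", "informational", "debug"]
REQUIRED_LEVEL = "informational"

def logging_gaps(devices):
    """Return hostnames whose logging config is missing or too coarse."""
    gaps = []
    for d in devices:
        level = d.get("log_level", "")
        too_coarse = level not in LEVELS or LEVELS.index(level) < LEVELS.index(REQUIRED_LEVEL)
        if not d.get("syslog_target") or too_coarse:
            gaps.append(d["hostname"])
    return gaps

devices = [
    {"hostname": "fw-01", "syslog_target": "10.0.0.5", "log_level": "informational"},
    {"hostname": "fw-02", "syslog_target": None, "log_level": "critical"},
]
```

Here fw-02 would be flagged twice over: it has no collector configured, and "critical" alone is far too coarse to reconstruct an attack afterwards.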

Enterprises need to continuously monitor the state of their firewall security, but by following these simple steps businesses can avoid some of the core misconfigurations and improve their overall security posture.


Sunday, 13 September 2015

Get ready to live in a trillion-device world

A swarm of sensors will let us control our environment with words or even thoughts

In just 10 years, we may live in a world where there are sensors in the walls of our houses, in our clothes and even in our brains.

Forget thinking about the Internet of Things where your coffee maker and refrigerator are connected. By 2025, we could very well live in a trillion-device world.

That's the prediction from Alberto Sangiovanni-Vincentelli, a professor of electrical engineering and computer science at the University of California at Berkeley.

"Smartness can be embedded everywhere," said Sangiovanni-Vincentelli. "The entire environment is going to be full of sensors of all kinds. Chemical sensors, cameras and microphones of all types and shapes. Sensors will check the quality of the air and temperatures. Microphones around your environment will listen to you giving commands."

This is going to be a world where connected devices and sensors are all around us -- even inside us, Sangiovanni-Vincentelli said in an interview with Computerworld during DARPA's Wait, what? Forum on future technology in St. Louis this week.

"It's actually exciting," he said. "In the next 10 years, it's going to be tremendous."

According to the Berkeley professor and researcher, we won't have just smartphones.

We'll have a swarm of sensors that are intelligent and interconnected.

Most everything in our environment -- from clothing to furniture and our very homes -- could be smart. Sensors could be mixed with paint and spread onto our walls.

We'll just speak out loud: information will be given to us instantly without our having to do an online search, phone calls will be placed, and a robot could start to clean or make dinner.

And with sensors implanted in our brains, we wouldn't even need to speak out loud to interact with our smart environment.

Want something? Just think about it.

"The brain-machine interface will have sensors placed in our brains, collecting information about what we think and transmitting it to this complex world that is surrounding us," said Sangiovanni-Vincentelli. "I think I'd like to have an espresso and then here comes a nice little robot with a steaming espresso because I thought about it."

Pam Melroy, deputy director of DARPA's Tactical Technology Office, said the Berkeley professor isn't just dreaming.

"I do think there's something to that" scenario, said Melroy, who is a retired U.S. Air Force officer and former NASA astronaut. "At the very least, we should be preparing for it and thinking of what is needed. We get into very bad places when technology outstrips our planning and thinking. I'd rather worry about that and prepare for it even if it takes 20 years to come true, than just letting it evolve in a messy way."

While having a trillion-device life could happen in as little as 10 years, Sangiovanni-Vincentelli said there's a lot of work to be done to get there.

First, we simply don't have the network we'd need to support this many connected devices.

We would need communication protocols that consume very small amounts of energy and can transmit fluctuating amounts of information, the professor explained. Businesses would need to build massive numbers of tiny, inexpensive sensors. We'll need more and better security to fend off hacks to our clothing, walls and brains.

And the cloud will have to be built out to handle all of the data that these trillion devices will create.

"Once you have the technology enabling all of this, we should be there in 10 years," said Sangiovanni-Vincentelli.

With all of these devices, many people will be anxious about what this means for personal privacy.

Sangiovanni-Vincentelli won't be one of them, though.

"Lack of privacy is not an issue," he said. "We've already lost it all... If the government wants me now, they have me. Everything is already recorded somewhere. What else is there to lose?"

Melroy also is more excited than nervous about this increasingly digital future.

"As a technologist, I don't fear technology," she said. "I think having ways that make us healthier and more efficient are a good thing... There is social evolution that happens with technological evolution. We once were worried about the camera and the privacy implications of taking pictures of people. The challenge is to make the pace of change match the social evolution."

