We have been talking about new architectures for our environments for quite a long time now.
Nowadays SOA and Cloud lead the conversation, but what does implementing those architectures really mean for our networks?
One of the problems I’ve noticed when talking with customers and partners is that they usually try to apply the same techniques they used for old network deployments to the new ones. This is a mistake for several reasons, but even from a merely philosophical point of view it makes little, if any, sense to apply old rules to new ideas.
So what has really changed in these approaches (Cloud and SOA) that requires us to shift the way we design and deploy networks?
Let’s say there are some evident changes. First of all, the topology of connections has been dramatically modified: where once we could simply assume an identity between a user or service and its IP address, this is no longer possible.
The reasons behind this are easily found on both the client and the server side of the equation.
No more physical server locations
Virtualization simply changed the rules of the game, breaking the identity between the physical location of a server and the service it provides. This is a huge change in the way we plan and deliver our services.
The classic structure was something like this:
The service used to be provided by one or more servers with a well-defined physical location and IP.
The client usually shared the same setup, with a well-defined physical location and a fixed IP (or an address taken from a well-defined pool).
In this situation it was relatively simple to define access and security rules.
Users were defined by membership in a specific directory group (Active Directory or LDAP or… who really cares?), while client computers were identified by their IP range.
From a service delivery and security perspective, this translated into two completely separate sets of activities:
The owner of the network set delivery and security rules based on IP and MAC addresses, creating tables to allow or block access to physical locations defined by their IP ranges. This way there was a sort of identity between the IP structure and the topology, which was then mirrored at the upper layers by software services.
The owner of the service, at the same time, could ignore the network structure and limit security and delivery concerns to authenticating the requester, providing different levels of access to the different layers or services exposed by the software.
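To make that separation concrete, here is a minimal sketch (Python, with invented group names and IP ranges) of how the two sets of rules used to live side by side: the network owner checking where the client sits, the service owner checking who the user is, each ignoring the other.

```python
from ipaddress import ip_address, ip_network

# Network owner's view: access defined purely by IP ranges (hypothetical values).
ALLOWED_CLIENT_NETS = [ip_network("10.10.0.0/16"), ip_network("10.20.0.0/16")]

def network_allows(client_ip: str) -> bool:
    """Old-style network rule: the client is trusted because of where it sits."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_CLIENT_NETS)

# Service owner's view: access defined purely by directory group membership
# (a stand-in for an Active Directory / LDAP lookup; names are made up).
GROUP_MEMBERS = {"finance-app-users": {"alice", "bob"}}

def service_allows(user: str, group: str = "finance-app-users") -> bool:
    """Old-style service rule: the user is trusted because of who they are."""
    return user in GROUP_MEMBERS.get(group, set())

# The two checks live in separate worlds and are evaluated independently.
if network_allows("10.10.4.27") and service_allows("alice"):
    print("access granted")
```

The key point is that both checks could stay static precisely because IP ranges and physical locations never moved.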
This approach led information technology for decades, then something happened: the disruptive introduction of virtualization.
Virtualization has been a successful technology because of its promise to lower the TCO of our networks.
The original idea was to abstract the physical server from the OS and applications, making the physical server able to run multiple different instances.
The advantage was a standard hardware interface seen by the OS (no more driver nightmares, BIOS upgrade pain and the like) and the possibility of reducing the overall number of physical devices by running more instances on a single piece of hardware.
The increasing power of hardware platforms made this approach successful, but at the beginning virtualization was just used to hide the physical server and nothing more.
Nothing had really changed here, besides the fact that more services were running on the same physical platform.
But changing technology creates new needs, and so virtual infrastructures evolved into something completely new.
Basically, the abstraction layer provided by the virtual environment has been expanded to offer complete abstraction from the physical topology. Nowadays a virtual environment can run as a single environment across different hardware and different locations; at the same time, the services running inside it are able to move from one hardware structure to another according to their computational needs, and for the same reason instances can be created on the fly by the service layer or by the virtual environment layer.
This is a radical change in the design of applications, security and networks. While before a simple IP was a good token to identify the physical layer, the virtual layer and the service layer at once, now everything is more complex.
From a logical point of view, the design problem is that we have multiple required connections inside the virtual environment: the entities inside it can create complex relationships among themselves (think of a classic SOA implementation), and they also need to reach the physical layer.
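As a rough illustration, assuming some kind of registry keeps track of where instances currently live (the service names, hosts and addresses below are hypothetical), consumers can only rely on the service’s name, while the underlying IP keeps changing as instances migrate or are re-created:

```python
# Hypothetical registry: service name -> currently running instances (host, ip).
# In a virtualized/SOA environment this mapping changes as VMs migrate or new
# instances are spawned, so it cannot be baked into static ACLs.
registry = {
    "billing-service": [("esx-host-01", "172.16.1.10")],
}

def migrate(service: str, new_host: str, new_ip: str) -> None:
    """Simulate a live migration or re-instantiation: same service, new address."""
    registry[service] = [(new_host, new_ip)]

def resolve(service: str) -> str:
    """Consumers look the service up by name, never by a fixed IP."""
    host, ip = registry[service][0]
    return ip

print(resolve("billing-service"))   # 172.16.1.10
migrate("billing-service", "esx-host-07", "172.16.9.23")
print(resolve("billing-service"))   # 172.16.9.23 - the IP no longer identifies anything stable
```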
There are obvious problems related to authentication, identity and flow control, network control and monitoring inside the virtual environment, as well as in its interaction with the physical environment. In a single datacenter, the physical backplane and the communication between physical servers is usually a problem solved with datacenter-specific technologies such as Cisco Unified Computing.
The situation is actually far more complex if we consider a geographical implementation, as used to build SaaS or cloud architectures.
Different environments can be located in different datacenters while still offering a single virtual environment.
Applications living in the virtual servers can be located anywhere and change location upon request or based on load requirements.
In this situation we add another layer of complexity, since the virtual layer needs physical geographical connections to emulate a single virtual environment, and at the same time applications need to communicate both inside and outside their virtual environment.
The physical network layer needs to manage several different kinds of traffic: the communication between the virtual layer units, the communication between services that may need to talk outside the virtual environment (a typical SOA requirement), and the communication with clients requesting the service (we’ll explore this further in a bit).
This kind of situation is typical in cloud implementations, where the physical location of the provided service should not influence the client experience, no matter where the client is.
In a typical SOA implementation we add a new level of complexity, since the provided service can be composed from different units that can be stored, generated and delivered in different ways.
This kind of complexity is hard to manage with traditional techniques. The first thing we have to realize is that we need to extend control inside the virtual environment and its units from a network, authentication and identity point of view.
Since this post is not strictly about SOA architecture, I will not go deeper into the modules’ authentication and security needs, and I will talk generally about some network requirements.
Any service that needs to communicate with another, inside or outside the virtual environment, through a network protocol (TCP/IP v4 or v6) usually needs to be provided with some sort of connection link. This can be provided by a physical switch or by a virtual one running in the virtual environment. Using a physical switch may seem, apparently, a great solution in terms of performance and security; this is actually a misconception for several reasons:
First of all, communication outside the virtual environment imposes overhead on both the service and the virtual environment; if we widen the structure to a geographical scale, this overhead can become barely manageable.
The second aspect to keep in mind is that some network attacks become easier in this situation, since the real communicating endpoint is hidden behind the virtual shield; impersonating a service and accessing data is therefore not a remote threat.
If the physical switch cannot scale well, the virtual one has, on the other side, another set of problems: resource consumption (CPU and network latency, for instance), the need to interface with the physical environment, a non-matching VLAN system and so on.
The problem is to overcome those limitations while keeping the best of both solutions.
The solution the market is presenting nowadays is the integration of a virtual switch layer with a physical, datacenter-scalable one.
The idea is to have a single switch with two faces, one in the virtual world and one in the physical world. Cisco Nexus is a good example of this kind of approach.
Similar requirements apply to firewalling. Since what happens inside a virtual environment is a sort of black box to the outside world, keeping a security eye on it, to check that the correct communications are in place and nothing strange is happening, is mandatory. Again we have a dichotomy between the physical and the virtual world; the solution nowadays is to adopt a virtual firewall able to deal with the virtual environment’s internal communications. A good example can be found, again, in Cisco with VSG and the virtual ASA.
Cisco VSG Security in a Dynamic VM Environment, Including VM Live Migration
Basically, this kind of solution addresses two needs: managing and securing internal virtual traffic, and providing an interface from the physical world to the virtual one and vice versa.
Alas, this is only one part of the equation: if on one side we have the problem of controlling, managing and deploying the services we want to provide, on the other we have the problem of delivering those services to someone who can use them.
Here again the problem is evolving due to several factors: the vanishing of the physical borders of our networks, the consumerization of browser-capable devices, and the shift from simple data to rich, context-aware multimedia content, just to name a few.
Users try to access resources from anywhere, with different devices, and we are barely able to know from where they will connect.
The initial situation was relatively easy to manage: as with servers, clients were easily locatable, and an IP address was more than enough to build a trust relationship between client and server.
With datacenter consolidation the number of servers and devices grew, but again, with a limited presence of remote users, the location of both sides was quite easy to understand. The introduction of VLAN technologies, stateful inspection firewalls, L3/L4 switches and the pervasive use of access lists addressed (at least apparently) most of the issues.
Virtualization opened a breach in this structure, introducing a first layer of indetermination: virtual servers and services were no longer physically defined by their IP, since they could share the same physical location.
While complexity was being added on the “server” side, the client side was also expanding, with a higher presence of remote users and the introduction of new services on the network (who does not have an IP phone nowadays?).
More devices mean more network requirements, and so datacenter complexity, thanks to virtual technology, expanded beyond the constraint of a single physical location. As discussed before, this leads to a series of problems, paired on the client side with the expansion of remote and local users on different devices.
And then came the cloud, and the final vanishing of any predetermined physical location for our clients and our services.
The client and server sides thus evolved in an interconnected way, but network components and design did not always follow this thread.
Using old-fashioned access lists, IP-based network policies and static VLAN assignment to manage this situation creates a level of complexity that makes things unmanageable. Nowadays firewalls require thousands of rules to accommodate every single special need, and alas, we all have a lot of special needs.
It’s clear to me that in this situation we need to shift from a static design to a dynamic one, able to accommodate the different needs of this evolving environment. A technology like Cisco TrustSec addresses these kinds of requests: using SGTs (Security Group Tags), it basically assigns group membership dynamically based on user identity, regardless of IP or location, drives the packets to their destination accordingly, and encrypts the network communication. Driving traffic correctly regardless of the IP is a mandatory requirement in a dynamic Cloud or SOA environment.
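As a toy illustration of the idea, and not actual TrustSec configuration (the tag values, group names and policy matrix below are invented), a tag-based policy is expressed once per pair of groups, and the decision follows the identity wherever it connects from:

```python
# Identity -> security group tag (hypothetical values for illustration only).
SGT_BY_IDENTITY = {
    "alice@corp": 10,     # e.g. "Employees"
    "mri-scanner-3": 30,  # e.g. "Medical-Devices"
}

# Policy is expressed once, between groups, instead of thousands of IP rules.
POLICY = {
    (10, 20): "permit",   # Employees -> Business-Apps
    (30, 20): "deny",     # Medical-Devices -> Business-Apps
}

def decide(src_identity: str, dst_tag: int) -> str:
    src_tag = SGT_BY_IDENTITY.get(src_identity)
    if src_tag is None:
        return "deny"     # unknown identity: no implicit trust
    return POLICY.get((src_tag, dst_tag), "deny")

# The same identity gets the same decision from any IP or location.
print(decide("alice@corp", 20))      # permit
print(decide("mri-scanner-3", 20))   # deny
```

The point is that the rule count grows with the number of groups, not with the number of IP addresses and locations.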
As important as driving the network traffic correctly is the need to determine which kind of access we want to assign. We have plenty of devices such as tablets, smartphones, laptops, IP phones, printers, scanners, physical security devices and medical equipment that need to access our services somehow and need to be authorized on the network. Using a network access control service is mandatory as well, in order to correctly filter devices on both wireless and wired networks (think of what happened recently in Seattle to understand this kind of need). Again, we can think of a Cisco product like ISE to accomplish this.
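The sketch below, purely illustrative and not tied to any real ISE policy (the attributes, device categories and resulting authorizations are made up), shows the kind of decision a network access service has to make: profile the device from what can be observed on the wire, then grant it only the access its category deserves.

```python
# Hypothetical device profiles and the network treatment each one receives.
KNOWN_PROFILES = {
    "ip-phone": {"authorized": True,  "segment": "voice"},
    "printer":  {"authorized": True,  "segment": "print"},
    "laptop":   {"authorized": True,  "segment": "corp"},
    "unknown":  {"authorized": False, "segment": "quarantine"},
}

def classify(dhcp_vendor: str, mac_oui: str) -> str:
    """Toy profiler: guess the device type from a couple of observed attributes."""
    if "phone" in dhcp_vendor.lower():
        return "ip-phone"
    if mac_oui in {"00:1B:A9"}:          # example OUI, purely illustrative
        return "printer"
    if "windows" in dhcp_vendor.lower() or "mac os" in dhcp_vendor.lower():
        return "laptop"
    return "unknown"

def authorize(dhcp_vendor: str, mac_oui: str) -> dict:
    """Combine the guessed profile with the treatment it is entitled to."""
    profile = classify(dhcp_vendor, mac_oui)
    return {"profile": profile, **KNOWN_PROFILES[profile]}

print(authorize("Cisco IP Phone 7941", "00:1E:F7"))
print(authorize("some-iot-gadget", "AA:BB:CC"))  # lands in quarantine
```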
SOA, Cloud and the network–part 1 by The Puchi Herald Magazine is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.