Load balancer introduction
If this is the first time you've heard about load balancers, you should read my previous post here. In this post, I'm going to show how to set up a simple HTTP load balancer with Nginx and HAProxy, cover some problems that come up when implementing one for a system, and touch on load balancer security. There are already some great resources on these topics; I've combined them here in the hope that they help you see the bigger picture.
Installing Nginx, HAProxy
Installing Nginx
Ubuntu:
apt-get install nginx
Installing HAProxy
If you are using Debian or Ubuntu, you can choose a package suitable for your system here. In my case I'm using Ubuntu 14.04 and I want to install the latest HAProxy (1.7 as of this writing):
apt-get install software-properties-common
add-apt-repository ppa:vbernat/haproxy-1.7
apt-get update
apt-get install haproxy
To set up HAProxy on CentOS, you can install it from the default repositories:
yum -y install haproxy
Confirm HAProxy installed:
$ haproxy -v
HA-Proxy version 1.7.5-1ppa1~trusty 2017/04/06
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>
Now we're done installing HAProxy.
How to configure Nginx and HAProxy load balancing
For HTTP load balancing there are two common choices: Nginx or HAProxy. Both are great, and the right one depends on the system you want to build. Let's start with configuration examples; you can easily recreate them for testing using a tool like VirtualBox, Vagrant, or Docker.
Nginx load balancing configuration example
With Nginx (version > 1.10 here), edit the file /etc/nginx/nginx.conf:
sudo vim /etc/nginx/nginx.conf
The simplest configuration for load balancing with nginx may look like the following:
http {
    upstream back-end {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        # load balancing server port
        listen 80;

        location / {
            # point to upstream
            proxy_pass http://back-end;
        }
    }
}
To use NGINX with a group of servers, you first need to define the group with the upstream directive, which is placed in the http context. Inside the upstream block are the servers we want to point to; add a port if a server is not using the default port 80, like the following:
upstream back-end {
    server srv1.example.com:9000;
}
When the load balancing method is not specifically configured, it defaults to round robin; we can use other methods such as ip_hash or least_conn.
For example:
upstream back-end {
    ip_hash;
    server srv1.example.com:9000;
}
To keep this short, I'm not covering a few options (weighted load balancing, health checks) in depth here; read the next part to find out more.
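Just as a quick taste, both are parameters on Nginx's server directive; the weight, max_fails, and fail_timeout parameters shown here are standard Nginx options:

upstream back-end {
    server srv1.example.com weight=3;                      # receives roughly 3x the requests
    server srv2.example.com max_fails=3 fail_timeout=30s;  # passive health check: mark down after 3 failures
}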
Once your configuration is in place, save nginx.conf and restart Nginx:
sudo service nginx restart
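You can also ask Nginx to validate the configuration syntax before restarting; this test mode is built into Nginx itself:

sudo nginx -t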
HAProxy load balancing configuration example
The HAProxy configuration lives at /etc/haproxy/haproxy.cfg. The default configuration looks like:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
Within the global section, we probably won't need to make any changes.
Within the defaults section we have logging, timeout options, and the load balancer mode (tcp/http).
Load balancing configuration
* frontend – where HAProxy listens to connections
* backend – where HAProxy sends incoming connections
* stats – Optionally, HAProxy's web tool for monitoring the load balancer and its nodes
A full example looks like:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 172.17.0.2 check
    server web02 172.17.0.3 check
    server web03 172.17.0.4 check

listen stats *:1936
    stats enable
    stats uri /
    stats hide-version
    stats auth someuser:password
Save and test:
sudo service haproxy restart
Make sure your backend servers are already running and that each server's index file has been modified so you can tell server 01, 02, and 03 apart.
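A quick way to watch round robin in action, assuming each server's index page prints its own name, is to hit the load balancer a few times from a shell:

# Responses should cycle through web01, web02, web03
for i in 1 2 3 4 5 6; do curl -s http://localhost/; done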
For more information about these configuration options, you can visit here.
Load balancing issues
Asset management
- Static assets (images, JS, CSS) must be the same across servers. The main problem here is keeping static assets successfully replicated and identical on each server.
- You might also benefit from using a CDN (content delivery network) like Cloudflare, MaxCDN, or CloudFront. They cache your static assets, reducing the load on your servers.
- If you allow user uploads, an uploaded file is stored on only one server, and the other servers won't find that file. A common solution is to host all static assets in a separate location, such as Amazon's S3; we call this central file storage (see the sketch after this list).
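A minimal sketch of the central-file-storage idea on the Nginx side: each web node proxies upload requests to the shared store instead of its local disk (the bucket name is a placeholder):

# Inside each web node's server block
location /uploads/ {
    # Serve uploaded files from the central store so every node sees the same data
    proxy_pass https://my-bucket.s3.amazonaws.com/uploads/;
}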
Sessions
Session information is often saved in a temporary location on a web server. A user may log in and have a session on one web server. However, the load balancer may route that user to another web server which doesn't have the session information; to the user, it appears that they are no longer logged in. There are three common fixes for this.
* Cookie-based sessions
The session data is not saved on the server or in any other storage, but instead lives inside the browser's cookie. This approach is not recommended, because storing session data on the client is not secure.
* Sticky Sessions
Also called "session affinity". The load balancer tracks which server a client was first routed to and sends that client's subsequent requests to the same web server.
The examples below show how to configure sticky sessions:
HAProxy Session Affinity:
backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    server web01 127.0.0.1:9000 cookie web01 check
    server web02 127.0.0.1:9001 cookie web02 check
    server web03 127.0.0.1:9002 cookie web03 check
Nginx Session Affinity:
upstream app_example {
    ip_hash;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
}
* Central session storage
Session storage is typically centralized within an in-memory store such as Redis or Memcached. Persistent stores such as a database are also commonly used.
Using a cache (Redis, Memcached) for session storage lets all the web servers connect to a central session store. This grows your infrastructure a bit, but lets your load be truly distributed across all web nodes.
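As a concrete illustration, if your application happened to be PHP with the phpredis extension installed, pointing sessions at a shared Redis node is a two-line change in php.ini (the hostname is a placeholder):

; Store sessions in a shared Redis instance instead of local files
session.save_handler = redis
session.save_path = "tcp://redis.internal:6379"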
Lost Client Information
The problem here is that if the load balancer is a proxy in front of your web application, it may appear to your application that every request is coming from the load balancer!
If HTTP request logs are stored on your web nodes, you may lose important information needed when auditing your logs.
Fortunately, most load balancers provide a mechanism for giving your web servers and application this information. If you inspect the headers of a request received from a load balancer, you might see these included:
* X-Forwarded-For
* X-Forwarded-Host
* X-Forwarded-Proto / X-Forwarded-Scheme
* X-Forwarded-Port
These headers can tell you the client's IP address, the hostname used to access the application, the scheme used (HTTP vs HTTPS), and which port the client made the request on.
// JSON representation of headers sent from a load balancer
{
    "host": "example.com",
    "cache-control": "max-age=0",
    "accept": "text/html",
    "accept-encoding": "gzip,deflate,sdch",
    "x-forwarded-port": "80",        // An x-forwarded-port header!
    "x-forwarded-for": "172.17.42.1" // An x-forwarded-for header!
}
SSL Traffic
In a load-balanced environment, SSL traffic is often decrypted at the load balancer. However, there are actually a few ways to handle SSL traffic when using a load balancer.
* SSL Termination
When the load balancer is responsible for decrypting SSL traffic before passing the request on, it's referred to as "SSL termination".
The downside of SSL termination is that the traffic between the load balancer and the web servers is not encrypted. This leaves the application open to possible man-in-the-middle attacks.
However, this risk is usually mitigated by the fact that load balancers are often within the same infrastructure (data center) as the web servers.
HAProxy SSL Termination:
frontend localhost
    bind *:80
    bind *:443 ssl crt /etc/ssl/xip.io/xip.io.pem
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server web01 172.17.0.3:9000 check
    server web02 172.17.0.3:9001 check
    server web03 172.17.0.3:9002 check
* SSL Pass-through
In this case, the load balancer doesn't decrypt the request, but instead passes it through to a web server. The server must then decrypt it.
SSL pass-through is only supported by load balancers that handle traffic at the TCP level rather than the HTTP level.
That rules out Nginx for SSL pass-through, but HAProxy will happily accomplish this for you!
HAProxy SSL Pass-Through:
frontend localhost
    bind *:80
    bind *:443
    option tcplog
    mode tcp
    default_backend nodes

backend nodes
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server web01 172.17.0.3:443 check
    server web02 172.17.0.4:443 check
Logs
Now we have multiple web servers, but each one generates its own log files! It's painful to audit logs on each server separately, so centralizing your logs can be very beneficial.
You may wish to just collect logs from the load balancer, skipping the web server logs.
The simplest way to do this is to combine Logrotate's functionality with an upload to an S3 bucket. This at least puts all the log files in one place that you can look into.
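A minimal sketch of that idea, assuming the AWS CLI is installed and using a placeholder bucket name (e.g. in /etc/logrotate.d/haproxy):

/var/log/haproxy.log {
    daily
    rotate 7
    compress
    delaycompress
    postrotate
        # Ship compressed rotations to a central bucket, one prefix per host
        aws s3 sync /var/log/ s3://my-log-bucket/$(hostname)/ --exclude "*" --include "haproxy.log.*.gz"
    endscript
}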
However, there are plenty of centralized logging servers that you can install in your infrastructure or purchase.
Some popular self-install loggers:
* Logstash, part of the popular ELK stack
* Graylog2
* Splunk
Fault tolerance
What happens if there's a problem with the load balancer server itself?
This is a problem we need to consider: the load balancer is now a single point of failure. The solution is to use more than one load balancer and connect them together, so that whenever one of them goes down for any reason, your system keeps working.
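A common implementation is a floating IP managed by keepalived (VRRP) on two load balancer machines: clients connect to the floating IP, and if the MASTER dies, the BACKUP takes the IP over. A minimal sketch for the primary node, with the interface name and addresses as placeholders (/etc/keepalived/keepalived.conf):

vrrp_script chk_haproxy {
    script "pidof haproxy"   # node is healthy while HAProxy is running
    interval 2
}

vrrp_instance VI_1 {
    interface eth0           # NIC that carries the floating IP (placeholder)
    state MASTER             # the peer uses state BACKUP and a lower priority
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.168.1.100        # floating IP that clients connect to (placeholder)
    }
    track_script {
        chk_haproxy
    }
}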
The post is quite long now, so I'll cover the last part, identifying load balancers in penetration testing, in a later post.
References
https://serversforhackers.com/so-you-got-yourself-a-loadbalancer
https://www.nginx.com/resources/admin-guide/load-balancer/
https://www.youtube.com/watch?v=mIOw4a34LCk