ESXi>Graylog2 – Quickstart

This is the English version of a blog post from Raphael Schitz at hypervisor.fr.

[UPDATE] For those who want to quickly set up alarms, you need to modify the following file: /usr/share/graylog2-web/config/email.yml and add these two lines to your crontab:

su - -c 'cd /usr/share/graylog2-web;rake RAILS_ENV=production streamalarms:send>>/var/log/graylog.log'
su - -c 'cd /usr/share/graylog2-web;rake RAILS_ENV=production subscriptions:send>>/var/log/graylog.log'
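
As written above, these are just the commands; inside a crontab they still need a schedule field. A minimal sketch, assuming root's crontab (crontab -e) and a one-minute interval:

*/1 * * * * su - -c 'cd /usr/share/graylog2-web;rake RAILS_ENV=production streamalarms:send>>/var/log/graylog.log'
*/1 * * * * su - -c 'cd /usr/share/graylog2-web;rake RAILS_ENV=production subscriptions:send>>/var/log/graylog.log'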

Those who are using Graylog2 know what a powerful syslog server it is. And you know as well how painful it is to install and configure. Furthermore, those who have been using it to collect ESXi logs have noticed that Graylog2 doesn’t support the ESXi 5.x log format. The ESXi 4.x log format is perfectly handled, though. Let’s kill two birds with one stone 🙂

Simplicity-wise, Mick Pollard posted a How-To guide this summer on installing and configuring the Graylog2 packages on Ubuntu 12.04. We will add some pieces from another How-To guide to make the Graylog2 web interface run under Apache.

For compatibility’s sake, we will configure Graylog2’s listener on an alternate port, 10514 in this case. Indeed, port 514 will be used by rsyslog, which will ingest ESXi 5.x logs and forward them in the correct format to the Graylog2 server. Watch out, this is going to be fast:

echo 'deb http://ppa.lunix.com.au/ubuntu/ precise main' | sudo tee /etc/apt/sources.list.d/aussielunix.list
apt-key adv --keyserver keyserver.ubuntu.com --recv D77A4DCC
apt-get update
apt-get install mongodb elasticsearch graylog2-server graylog2-web apache2 libapache2-mod-passenger

Have a snack because there is about 500MB to download and install…

Next we will have to configure some stuff:
/etc/graylog2.conf
syslog_listen_port = 10514

/etc/rsyslog.conf
$ModLoad immark
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514

/etc/rsyslog.d/50-default.conf
#*.*;auth,authpriv.none -/var/log/syslog

/etc/apache2/sites-available/default

<VirtualHost *:80>
DocumentRoot /usr/share/graylog2-web/public/
RailsEnv 'production'
<Directory /usr/share/graylog2-web/public/>
Allow from all
Options -MultiViews
</Directory>

ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
</VirtualHost>
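
On Ubuntu 12.04 the libapache2-mod-passenger package normally enables the module for you; if it doesn't (or if the default site got disabled), the usual commands do the trick, and the Apache restart happens further down anyway:

a2enmod passenger
a2ensite default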

/etc/rsyslog.d/32-graylog2.conf

$template GRAYLOG2,"<%PRI%>1 %timegenerated:::date-rfc3339% %HOSTNAME% %syslogtag% - %APP-NAME%: %msg:::drop-last-lf%\n"
$ActionForwardDefaultTemplate GRAYLOG2
$PreserveFQDN on
*.*     @localhost:10514

/etc/security/limits.conf
root - nofile 64000
root - memlock unlimited

/etc/pam.d/su
session required pam_limits.so
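
To check that a fresh root session actually picks up the new limits (the values below are what we expect from the limits.conf entries above):

su - root -c 'ulimit -n; ulimit -l'
# should print 64000 and unlimited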

Then shake it baby 🙂

service elasticsearch start
service mongodb restart
service graylog2-server start
service rsyslog restart
service apache2 restart
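
At this point a quick end-to-end test doesn’t hurt: send a message through the local rsyslog (which forwards everything to the Graylog2 listener on 10514) and check that it shows up in the web interface.

logger -t graylog2-test "Hello from rsyslog to Graylog2"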

Depending on the format of the messages, you may need reverse DNS to resolve hostnames to IPs. On the screenshot below you will notice logs from ESXi 5, pfSense and Astaro/UTM. We have also validated this configuration for ESXi 4, FreeNAS, NTsyslog, Snare/Epilog and nxlog.
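
For reference, pointing an ESXi 5.x host at this collector can be done with esxcli; the IP below is just an example, replace it with your rsyslog server, and note that on ESXi 5.0/5.1 the outgoing syslog firewall ruleset also needs to be opened:

esxcli system syslog config set --loghost='udp://192.168.1.10:514'
esxcli system syslog reload
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true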

Enjoy Graylog2’s great features such as Streams and Analytics.


Bull’s BCS Architecture – Deep Dive – Part 1

Before going further, let’s put here a list of related posts. Although not required, I encourage you to go through them all before reading the following post.

OK, now let’s deep-dive into this BCS technology. I ended my previous post by saying that Bull’s BCS solves scale-up issues without compromising performance. Here is a graph showing what that means.

Bullion measured performance vs the maximum theoretical performance – Specint_rate 2006 – Courtesy of Bull

Bull’s BCS eXternal Node-Controller technology scales up almost linearly compared to the ‘glueless’ architecture. What’s the secret sauce behind this  awesome technology?

BCS Architecture

The BCS enables two key functionalities: CPU caching and the resilient eXternal Node-Controller fabric. These features serve to reduce communication and coordination overhead and provide availability features consistent with the Intel Xeon E7-4800 series processors.

BCS meets the most demanding requirements of today’s business-critical and mission-critical applications.

Detailed 4 Sockets Xeon E7 Novascale bullion Architecture – Courtesy of Bull

As shown in the above figure, a BCS chip sits on a SIB board that is plugged into the main board. When running in single-node mode, a DSIB (Dummy SIB) board is required.


BCS Architecture – 4 Nodes – 16 Sockets

As shown in the above figure, the BCS architecture scales to 16 processors, supporting up to 160 processor cores and up to 320 logical processors (with Intel HT). Memory-wise, the BCS architecture supports up to 256 DDR3 DIMM slots, for a maximum of 4TB of memory using 16GB DIMMs. IO-wise, up to 24 IO slots are available.

BCS key technical characteristics:

  • ASIC chip of 18x18mm with 9 metal layers
  • 90nm technology
  • 321 million transistors
  • 1837 (~43×43) ball connectors
  • 6 QPI (~fibers) and 3×2 XQPI links
  • High speed serial interfaces up to 8GT/s
  • power-conscious design with selective power-down capabilities
  • Aggregated data transfer rate of 230GB/s, that is 9 ports × 25.6 GB/s
  • Up to 300Gb/s bandwidth

BCS Chip Design – Courtesy of Bull

Each BCS module groups the processor sockets into a single “QPI island” of four directly connected CPU sockets. This direct connection provides the lowest latencies. Each node controller stores information about all the data located in the processors’ caches. This key functionality is called “CPU caching”. This is just awesome!

More on this key functionality in the second part. Stay tuned!

Source: Bull, Spec.org

 


Bull’s Implementation of a Glued Architecture

In my two previous posts, I introduced the concepts of ‘glueless’ and ‘glued’ as the two main scale-up architectures. You can read them here and here. You may also want to read this post in the series, about why it is now time to take a scale-up approach to virtualize the last bit, that is, resource-hungry business- and mission-critical applications.

We’ve seen that the ‘glued’ architecture is the best architecture choice to scale-up beyond 4- and 8-socket systems. We’ve also noticed that the quality of the OEM-developed eXternal Node-Controllers is critical.

Meet the Bull Coherence Switch (BCS) architecture. The BCS architecture is Bull’s implementation of the glued eXternal Node-Controller. It is the design foundation for the bullion x86 servers, which need to deliver more scalability, resiliency, and efficiency to meet the requirements of the most demanding applications in business computing.


16-socket glued architecture using Bull’s BCS technology – Courtesy of Bull

A bit of history: the BCS technology is the foundation of the bullx Supernodes series of supercomputers, designed to run HPC applications that require huge volumes of shared resources, in particular shared memory.

Bull decided to leverage that technology in their bullion series, pushing the limits of x86 enterprise-class servers to a new level.

In July 2012, the bullion server was ranked as the world’s fastest x86 enterprise-class server, according to the international SPECint®_rate2006 benchmark.

Featuring 160 Intel® Xeon® E7 cores and 4 terabytes of RAM, the bullion server achieved a peak performance of 4,110 on the SPECint®_rate2006 benchmark. The fastest competing system, the HP ProLiant DL980 G7, only managed 2,180.


The bullion server achieved a peak performance of 4,110 on the SPECint®_rate2006 benchmark – Courtesy of Bull


SPECint®_rate2006 data, July 2012

These results show not only that Bull’s ‘glued’ architecture is the way to go for scale-up, but also that Bull engineered a masterpiece of technology in the BCS.

Remember that one of the main drawbacks of the ‘glueless’ architecture is that up to 65% of the Intel QPI link bandwidth is consumed by the QPI source broadcast snoop protocol, that is, by maintaining cache coherency as the socket count increases. The performance increase is not linear with the number of added resources, and you’re limited to 8-socket systems!

Bull’s BCS solves these issues and shows that you can scale up beyond 8-socket systems without compromising performance. HPC technology delivered to x86 enterprise-class servers, thanks to Bull’s BCS eXternal Node-Controller!

In the next blog posts we will deep-dive into the BCS technology and uncover the secret sauce that makes the BCS sooo awesome!

Source: Bull, Intel, Wiki, Spec.org

 


Two Main Scale-Up Server Architectures – Part 2

In my previous article we discussed the ‘glueless’ architecture. You may want to read part 1 before proceeding.

We have seen that the ‘glueless’ architecture has some serious drawbacks. Let’s see if the second main scale-up server architecture can mitigate those issues. Meet…

The ‘glued’ architecture

We’ve seen that in the ‘glueless’ architecture, coordination and communication between the processor sockets creates a bottleneck. To overcome this problem, hardware manufacturers have added a ‘glue’ to the architecture. This ‘glued’ architecture uses external node-controllers to interconnect QPI islands, that is, clusters of processor sockets.


Glued Architecture

Intel QPI links offer a scalable solution based on OEM-developed eXternal Node-Controllers (referred to as XNCs). Using external node-controllers with the Intel Xeon E7-4800 series and its embedded memory controller implies a Cache Coherent Non-Uniform Memory Access (ccNUMA) system. The role of ccNUMA is to ensure cache coherency by tracking where the most up-to-date data is for every cache line held in a processor cache.

Latency between processor and memory in a ccNUMA system varies depending on the location of these two components in relation to each other. Manufacturers also want to minimize the bandwidth consumption resulting from coherency snoops (the Intel QPI source broadcast snoop protocol).

Therefore the quality of the OEM-developed eXternal Node-Controllers is critical, and only a few manufacturers are able to provide a server architecture which scales in pace with the resources added to the system.

In the next article in this series, I will focus on Bull’s eXternal Node-Controller, called the BCS. Stay tuned!

Source: Bull, Intel, Wikipedia


Two Main Scale-Up Server Architectures – Part 1

To address increasingly demanding workloads, processor sockets are added seamlessly within a single server: you’re scaling up. Sockets are connected together, along with the memory and IO boards, and applications can benefit from more compute power.

Refer to the first article of this series – Scale-Out And Scale-Up Architectures – The Business-Critical Application Point Of View.

There are two broad scale-up server architectures:

  • the “glueless” architecture
  • the “glued” architecture

The “glueless” architecture

The “glueless” architecture was designed by Intel. It is implemented in the Intel Xeon E7 series.

When building servers of 4 sockets and above, the processor sockets are directly connected together through the Intel QPI links.

The Intel QPI links are used to access memory, IO and networks, as well as the other processors.

A “glueless” socket uses one of its four Intel QPI links to connect the processor socket to IO and the remaining three Intel QPI links to interconnect the processor sockets.


4-socket glueless architecture – Courtesy of Bull

In an 8-socket configuration, each processor socket connects directly to three other sockets, while the connections to the other four processor sockets are indirect.


8-socket glueless architecture – Courtesy of Bull

The advantages of  a “glueless” architecture:

  • no requirement for specific development or expertise from the server manufacturer; every server maker can build an 8-socket server
  • thus the cost of a 4-socket or 8-socket server is also lower

The disadvantages of a “glueless” architecture:

  • the TCO goes up when scaling out
  • limited to 8-socket servers
  • difficult to maintain cache coherency as the socket count increases
  • performance increase not linear
  • price/performance ratio decreases
  • efficiency not optimal when running large VMs
  • up to 65% of the Intel QPI link bandwidth is consumed by the QPI source broadcast snoop protocol

What’s the issue with the Intel QPI source broadcast snoop protocol? To achieve cache coherency, a read request must be reflected to all processor caches as a snoop. You can compare this to doing a broadcast on an IP network. Each processor must check for the requested memory line and provide the data if it has the most up-to-date version. When the latest version is available in another cache, the source broadcast snoop protocol provides the minimum latency as the memory line is copied from one cache to the next. However, in a source broadcast snoop protocol all reads result in snoops to all other caches, consuming link and cache bandwidth, as these snoop packets use cache cycles and link resources otherwise used for data transfers.
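
A simplified back-of-the-envelope model (my own illustration, not from Intel’s documentation) shows why this gets worse as sockets are added: every read is snooped by all the other caches, so snoop traffic grows roughly quadratically while the useful QPI bandwidth per socket stays flat.

snoop messages per read ≈ N - 1          (N = number of processor sockets)
total snoop load        ∝ N × (N - 1)    4 sockets → 12, 8 sockets → 56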

The primary workloads affected by the Intel QPI source broadcast snoop issue are:

  • Java applications
  • large databases
  • latency sensitive applications

No bottleneck should result from a scale-up approach, otherwise the architecture is useless. Thus the performance increase should be linear with the resources added.

In the next part, we will discuss the “glued” architecture and how it addresses the drawbacks of the “glueless” architecture while keeping performance in line with the added resources.

Source: Bull, Intel, Wikipedia


Scale-Out And Scale-Up Architectures – The Business-Critical Application Point Of View

This post is the first in a series of articles focusing on a great piece of hardware you may have seen in action at VMware Barcelona 2012 in the Solutions Exchange hall.

By the end of 2012, over 50% of the applications running on x86 platforms will be virtualized. However, only 20% of mission-critical applications have so far been virtualized.

Is it because IT departments do not trust virtualization platforms? Do they find virtualization platforms not stable enough to host mission-critical applications?
Over the last decade, VMware has shown that virtualization is a reality, and virtualized applications are actually often more stable when running on trustworthy VMware platforms.

So if it is not a stability or trust issue, what’s the reason IT departments haven’t yet virtualized the last bit?

Scale-Out

Scale-out, a.k.a. scaling horizontally, means adding more nodes to the infrastructure, such as adding a new host to a VMware cluster.

As computer prices drop and performance continues to increase, low-cost ‘commodity’ systems are a perfect fit for the scale-out approach and can be configured in large clusters to aggregate compute power.

For the last seven years, people designing VMware virtual environments have been preaching a scale-out approach. One could argue with that approach and, as always, it depends. The pros are the low price of commodity hardware and the fact that usually only a few virtual machines per host are impacted whenever the commodity hardware fails. On the other side, the cons are that such a design requires more VMware licensing and more datacenter footprint, and usually those low-cost ‘commodity’ systems have a small reservoir of resources.

Scale-Up

To scale up, a.k.a. to scale vertically, means adding resources to a single host, typically CPUs and memory.

Usually that kind of host is beefier. They support 4 processor sockets with up to 512GB of memory. You may even see beefier systems which support up to 8 processor sockets and 1TB of memory. Some of us have been lucky enough to witness systems supporting up to 16 processor sockets and 4TB of memory. No, this is not a mainframe, but an x86-architecture-based system.

Moving to the so-called second wave of virtualization, that is, bringing the agility of virtualization to business-critical applications, is placing today’s enterprise VMware clusters under enormous stress. The challenges are:

  • Inadequate scaling of compute capabilities. Supporting highly demanding workloads is an issue with resource-limited, low-cost ‘commodity’ systems.
  • Insufficient reliability. Commodity hardware, or hardware using ‘commodity’ components, can be seen as less reliable. Reliability can be addressed with features I will talk about in the next articles.
  • Increased management complexity and operating cost. It is easier to manage 100 hosts than 1,000, and by the same token managing 10 hosts is even easier than 100. The same goes for OPEX: 10 hosts cost much less to operate than 1,000 hosts.

A scale-up approach is a perfect fit for business-critical applications requiring huge resources. Monster VM, hellooo! Power-hungry business-critical applications such as large databases, huge ERP systems, big data analytics, Java-based applications, etc. will directly benefit from a scale-up approach.

With the introduction of VMware vSphere 5, the amount of resources available to a single VM increased fourfold compared to the previous version, as shown in the picture below.

And lately, with the release of VMware vSphere 5.1, the monster VM beefed up one more time.

For a vSphere 5.1 Monster VM to do any work, the hypervisor will have to find and schedule 64 physical CPU cores. There are very few systems out there able to hold 64 cores, and even fewer capable of 16 processor sockets, 160 cores… Here is a hint, it starts with a B…

…To be continued!


A Year Of Blogging In Summary And Season’s Greetings

2011 comes to an end and it’s time to do some introspection of this year’s blogging experience! That sounds familiar 🙂

In May 2011 I joined VMware in a permanent position. I joined a top-notch team of Consultants. La crème de la crème, as we say in French.

I was honored to be awarded vExpert 2011. That’s two times in a row!

The VMware vExpert program was created in 2009 to show appreciation for those individuals who have significantly contributed to the community of VMware users over the past year. Many thanks go to the committee.

This year I also successfully passed my VCP5 certification. En route to VCAP certifications now and eventually VCDX!

Unfortunately I had to slow down my blogging activities this year. There are priorities in my life at the moment and among them are my sweet baby girl and beloved wife.

Nevertheless, what would a year of blogging be without some summary tables, statistics and charts ;)

Here are my 2011 top 10 posts in terms of page views only. These are not necessarily my favorite blog posts, though. Maybe an idea for another blog post 🙂

  • Installing Oracle Database Client 10g Release 2 (10.2) on a Windows 2008 R2 x64 – 9,620 views
  • One Of The Most Powerful Shuttle Barebone For My VMware Home Lab – 8,484 views
  • vSphere – Virtual Machine Startup and Shutdown Behavior – 6,982 views
  • Microsoft Network Load Balancing (NLB) on VMware ESX – 6,285 views
  • Upgrade ESXi4.0 to ESXi4.1 – The Unofficial Method – 5,595 views
  • Understanding VMFS Block Size And File Size – 4,850 views
  • How to increase the size of a local datastore … on an ESXi4? – 4,826 views
  • How To Set Up a Trunk Port Between An ESXi4.1 And An HP ProCurve 1810g-24 – 4,563 views
  • Understanding disk IOPS – 4,443 views
  • How To Troubleshoot a Broken RAID Volume On a QNAP Storage Device – 4,363 views

 

Again a big thank you to all my readers.

Best Wishes and a Happy New Year 2012.

 

 


Cluster Profiles

This is the English version of a blog post from Raphael Schitz at hypervisor.fr.

Raphael is a very smart guy, a fellow vExpert and a PowerCLI guru. Recently he came up with a great idea, which turned into a great blog post and a powerful script available for free. All credit goes to Raphael.

No need to remind you of the benefits of Host Profiles in terms of configuration consistency and correctness across the datacenter. With PXE Manager and PowerCLI you can free yourself from the hassle of deployment, and with Host Profiles you can automate and monitor host configuration management (these features were greatly improved in vSphere 5).

Unfortunately, Cluster configuration management hasn’t improved at the same pace and remains tedious, with no visibility into changes. You configure your Cluster settings properly and, 6 months later, after a few maintenance windows and some changes (e.g. Admission Control disabled and DRS set to Partially Automated), you find yourself in a situation where a broken blade powers off VMs which are unable to restart on other hosts in the Cluster because someone forgot to re-enable HA. We have experienced this situation, but hopefully our latest PowerCLI script will help us change those bad habits and behaviors once and for all: meet Manage-ClusterProfile.

Manage-ClusterProfile was developed for three simple tasks:

  • export Cluster configuration and settings to a cluster profile file.
  • compare Cluster configuration and settings with a cluster profile file.
  • import a cluster profile file to an existing Cluster.

The cluster profile file, which is an XML file, contains the entire configuration and settings of a Cluster (HA, DRS, DPM, rules, swapfile, etc.) and therefore allows a detailed comparison of similarities and differences.

Optionally you can send an email to vAdmins for instance.

The import function addresses only the Cluster’s own configuration and settings. For instance, Affinity Rules or any other per-VM settings (e.g. HA/DRS/DPM customization) are not imported.

The script has the following input parameters (an example invocation follows the list):

  • ManagedCluster [name of the Cluster]
  • Action [import|export|check]
  • ProfilePath [directory for export|path to xml cluster profile file for import and check]
  • SendMail [1 for enable]
  • ForceImport [1 for enable]
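
As an illustration only (the script file name, cluster name and paths below are made up for the example), an export followed by a compliance check could look like this from a PowerCLI session:

.\Manage-ClusterProfile.ps1 -ManagedCluster 'Prod-Cluster-01' -Action export -ProfilePath 'C:\ClusterProfiles\'
.\Manage-ClusterProfile.ps1 -ManagedCluster 'Prod-Cluster-01' -Action check -ProfilePath 'C:\ClusterProfiles\Prod-Cluster-01.xml' -SendMail 1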

To summarize this blog post, this script will allow you to create new Clusters by importing cluster profile templates with all your predefined configuration and settings, tailored to your own criteria. Run as a scheduled task, this script will also allow you to track changes and stay compliant. Of course, when you make changes to your Cluster, you will have to export them to the cluster profile file you use to track changes.
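
One hedged way to schedule the daily check from a Windows admin box with PowerCLI installed (the task name, time and paths are again just placeholders, and the script is assumed to load the PowerCLI snap-in itself):

schtasks /Create /SC DAILY /ST 06:00 /TN "Check-ClusterProfile" /TR "powershell.exe -NoProfile -File C:\Scripts\Manage-ClusterProfile.ps1 -ManagedCluster 'Prod-Cluster-01' -Action check -ProfilePath 'C:\ClusterProfiles\Prod-Cluster-01.xml' -SendMail 1"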

As usual do not hesitate to share your feedback and suggestions in the comment area 🙂

Enjoy !

Download Manage-ClusterProfile


A Little Sneak Peek At The Future Of Low Latency Ethernet

Look at the table below: network latency has improved far more slowly over the last three decades than other performance metrics for commodity computers.


While 5-10μs round-trip latency seems achievable within a few years, what about reducing RPC latency to 1μs in the long term?

And if we just integrate NIC functionality onto the main CPU die…

Stephen M. Rumble, Diego Ongaro, Ryan Stutsman, Mendel Rosenblum (one of the co-founders of VMware), and John K. Ousterhout at Stanford University co-authored a paper called It’s Time for Low Latency.

If you want a sneak peek at the future of low network latency, that’s definitely a paper to read 😉


Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs

I came across this technical paper called Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs.

Abstract from the paper:

This white paper summarizes findings and recommends best practices to tune the different layers of an application’s environment for latency-sensitive workloads.

If you are about to virtualize low-latency workloads or are simply looking at tuning your existing virtual environment for such workloads, this is the technical paper you need to read!

I like the tabulated summary at the end of the technical paper and have pasted a copy here. It is a very convenient checklist.

Some of these technical papers are like diamonds. To stay on top of the latest VMware technical papers and other information, create your own custom RSS feed at VMware.
