Channel: OlinData - puppet

The case for upgrading from pre-3.x Puppet without roles and profiles: rewriting instead of refactoring


This week I was discussing with one of our customers, who has been with us for a long time, why we would like to move them from Puppet 2.7 and an ancient repository layout to Puppet 3, PuppetDB, the Foreman, Hiera and roles and profiles. I figured more people face that decision, so I decided to write this blog post about it.

The environment
Before we dive into what the situation is on the Puppet side, let me quickly describe the infrastructure, as that definitely has an impact on the amount of work involved in either refactoring or rewriting.
The environment is not small, but not super large either, currently consisting of around 40 servers in production: a load-balanced LAMP stack on Debian Wheezy with several MySQL replication clusters behind it, plus some MongoDB, HAProxy, legacy NFS and other auxiliary services. The whole infrastructure runs on Linode, which offers a very nice balance between cloud convenience, performance and VPS pricing.

The current puppet situation
Currently this customer is on Puppet 2.7.x, without Hiera, and with a repository layout that was started three years ago. Puppet has come a long way since then and a lot of best practices have changed. In addition, new pieces of technology in the ecosystem (most notably the Foreman, PuppetDB and Hiera) and a vastly improved library of open source Puppet modules make it much easier to achieve things in a manageable way.

The question: refactor or rewrite
The question is whether we should refactor the current design into a roles and profiles design and make full use of Hiera and its capabilities, or start the whole thing from scratch. Let's look at the advantages and drawbacks a bit.

The advantages of a rewrite

  • No legacy code stays behind. We have learned a lot since we started with Puppet three years ago, and so has the community. Refactoring this codebase inevitably means carrying some of that horrible code forward, while a complete rewrite makes all of that go away. This is particularly painful in a case like this, where we used a lot of scope.lookupvar() calls in .erb templates; they don't work in Puppet 3 and would by themselves require a decent chunk of work to remove.
  • Splitting code from configuration data makes it easier to distribute the Puppet code to developers. This client has a development office in China and clearly prefers not to send their MySQL root passwords to offshore developers.
  • Using the Foreman as a frontend makes log inspection much easier. Any frontend (whether it's the Puppet Enterprise Console or the Foreman) makes searching the logs and getting some quick graphs on the state of the infrastructure much better. With the current setup, it is difficult if not impossible to move to the Foreman.
  • Performance. Puppet 3.x has seen serious performance improvements, which make a noticeable difference in an environment with heavier manifests.
  • Ease of maintenance. Not only does the code become easier to share, but a lot of the logic constructs that are in the code now will be handled by Hiera. Take for instance the node definitions. They look like this now:

node 'm1a.lon.example.local' inherits 'lon.basenode' {

  class { 'mmm_primary_master':
    clustername => 'example-m1'
  }

  network::interface { 'eth0':
    ensure    => present,
    ipaddress => '12.34.12.34',
    gateway   => '12.34.12.1',
    netmask   => '255.255.255.0'
  }

  network::interface { 'eth0:1':
    ensure    => present,
    ipaddress => '192.168.12.34',
    netmask   => '255.255.128.0'
  }

  Firewall<<| tag == 'ams-chain' |>>

}

node 'm1b.lon.example.local' inherits 'lon.basenode' {

  class { 'mmm_secondary_master':
    clustername => 'example-m1'
  }

  network::interface { 'eth0':
    ensure    => present,
    ipaddress => '12.34.12.35',
    gateway   => '12.34.12.1',
    netmask   => '255.255.255.0'
  }

  network::interface { 'eth0:1':
    ensure    => present,
    ipaddress => '192.168.12.35',
    netmask   => '255.255.128.0'
  }

  firewall { '490 allow primary interface public ssh on m1b':
    chain  => 'INPUT',
    action => 'accept',
    proto  => 'tcp',
    destination => $::ipaddress_eth0,
    dport  => '22',
  }

  Firewall<<| tag == 'ams-chain' |>>

}

This would be reduced to something much more modular like:

# in site.pp
node 'm1a.lon.example.local', 'm1b.lon.example.local' {
  include role::mmm::master
}

# in role/manifests/mmm/master.pp
class role::mmm::master inherits role::mmm {
    include profile::mysql::master
    include profile::mmm::agent
}

# in role/manifests/mmm.pp
class role::mmm inherits role {
    include profile::mmm
}

# in role/manifests/init.pp
class role {
    include profile::network
    include profile::firewall
}

# in profile/manifests/network.pp
class profile::network {
  # set up base network stuff
  include ::network

  # load other network interfaces from hiera
  $networks = hiera_hash('networks', undef)

  if $networks {
    create_resources('network::interface', $networks)
  } else {
    fail('No network settings found in hiera in class profile::network.')
  }
}
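
To make the data side concrete, here is roughly what the 'networks' hash that profile::network looks up would contain for m1a. It is shown in Puppet hash syntax so the shape of the data is visible; in the real setup it would live in a per-node Hiera YAML file (the file path in the comment is illustrative):

# Sketch only: the 'networks' data for m1a, mirroring the eth0/eth0:1 resources
# from the old node definition. In practice this lives in a per-node Hiera YAML
# file such as hieradata/nodes/m1a.lon.example.local.yaml (path illustrative).
$networks = {
  'eth0'   => {
    'ensure'    => 'present',
    'ipaddress' => '12.34.12.34',
    'gateway'   => '12.34.12.1',
    'netmask'   => '255.255.255.0',
  },
  'eth0:1' => {
    'ensure'    => 'present',
    'ipaddress' => '192.168.12.34',
    'netmask'   => '255.255.128.0',
  },
}

# create_resources() then declares the same two network::interface resources
# that the old node definition listed by hand.
create_resources('network::interface', $networks)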

The drawbacks of a rewrite

  • Redoing a lot of work just to get back to the same point. This effort will probably require a good 60-70 hours to get right, which is a lot of time to spend. Then again, once it is done we can move faster in the future, and we get all the advantages outlined above.
  • Risk of throwing out functionality. We might initially end up with functionality that doesn't work even though it works now.

Another important point to consider is that a version upgrade will have to take place sooner or later due to support, security, functionality and/or other reasons. I prefer to do it now, and do it properly.

What are your thoughts on this? Thoughts and comments welcome!



Puppet Management GUI Comparison


One of the most asked questions during our trainings is which graphical frontend people should choose for managing their infrastructure. The answer is not an easy one, and it also keeps changing over time as the various projects release new versions. This post is an attempt to outline the major reasons for choosing one or the other.

We'll try to keep it up to date, so if you're reading this and you feel something is outdated and/or no longer correct feel free to let me know in the comments and we'll update this page asap.

Puppet Enterprise Console

Price: non-free, per node
Open Source: no
Status: Fully maintained by Puppet Labs
Official site: link

The first choice for many people. The Puppet Enterprise (PE) console has come a long way in recent years. While the first versions (anything before 3.0) contained a lot of questionable functionality, the newer versions 3.0 and 3.1 have many improvements and make this a top choice.

Discoverable Classes & Parameters

Version 3.1 added what is, in my opinion, a very important previously missing feature: "Discoverable Classes & Parameters". This allows the Enterprise Console to search for new classes and their class parameters within your Puppet repository. Especially for larger repositories this is important functionality that makes administration much easier.

Live management

Some of the nice functionality in the PE Console has to do with live management, which mostly uses MCollective under the hood. It allows you to restart services, upgrade packages and check the status of a specific resource on a set of servers remotely.

Update: here's a video outlining the functionality in Puppet Enterprise 3.1:

Puppet Open Source Dashboard

Price: free
Open Source: yes
Status: minimal community maintenance
Official site: link

This has a very simple recommendation from my side: Don't use it unless you have a very, very good reason. It's deprecated! Someone has volunteered to continue development, but judging from the commit log it doesn't seem like there's too much new development.

Puppetboard

Price: free
Open Source: yes
Status: maintained by Daniele Sluijters and some community involvement
Official site: link

This is a much younger project, and it aims to only do reporting but do it very well, so none of the more interactive features found in the other projects will be supported here. An interesting project, definitely good to keep an eye on if you're just looking for visualised reports. Puppetboard is written in Python.

Update: Here's a recording of a presentation by Daniele from PuppetCamp Amsterdam in January 2014. One of the most interesting takeaways is that it is not very scalable until the next release due to limitations in older versions of the PuppetDB API. Watch the full talk for more.

The Foreman

Price: free
Open Source: yes
Status: maintained by Red Hat and a very active community
Official site: link

The Foreman is the main competitor to the PE Console in this space. It was started by Ohad Levy, who works for Red Hat in Israel. The project is developing fast and is getting more and more interesting. It uses so-called smart proxies to hand functionality like TFTP and DNS off to separate servers, which is good for scalability.

It should be said that the Foreman aims to do more than the PE Console. With the Foreman you can provision new VMs on OpenStack, Google Compute Engine, Rackspace, AWS and a bunch of others. Once those machines come up, they can be managed with the Foreman acting as Puppet's ENC and reporting backend.

Update: here's a video from the 2014 linux.conf.au conference with a good overview of the Foreman:

Puppet Explorer

Price: free
Open Source: yes
Status: maintained by Spotify and some community involvement
Official site: link

This is the youngest of the PuppetDB reporting projects, and aims to only do reporting just like PuppetBoard. An interesting project, definitely good to keep an eye on if you're just looking for visualised reports. It's written in CoffeeScript and AngularJS and supports talking to multiple PuppetDB instances.

How can I use Puppet?


"How can I use Puppet in my company?"

This (or similar variations) is usually the most asked question when I discuss Puppet. In this post, I answer this question with examples from case studies and discussions that I have had with people using Puppet.

Let's start with a basic introduction of Puppet. Puppet is next-generation IT automation software for system administrators. It lets system administrators manage the entire infrastructure life cycle, and it helps to automate repetitive tasks, deploy critical applications and proactively manage change.

Puppet is used in a lot of scenarios to make the lives of system administrators a lot easier. Here are a few ways you can use Puppet in your company.

  1. Puppet for scalability - After a point, it becomes tough to scale your infrastructure manually. For example, your game is featured on TechCrunch, people are flocking to play it and you have to bring 10-20 servers up on an hourly basis. With Puppet in place, bringing up a new server, either physical or virtual, is a piece of cake. Using Puppet Enterprise, you can clone resources using a GUI, without writing a single line of code.
     
  2. Puppet to eliminate configuration drift - When you add a new system to your network, its configuration will drift over time unless you have the right tools in place. This may be due to users updating the system or applying a patch with good intentions, but it can give your compliance and change management team a headache. Imagine the case where you have thousands of systems. Puppet ensures that systems stay in the intended state by checking them every 30 minutes; if Puppet finds changes, it brings the system back to the desired state.
     
  3. Puppet to provision laptops and desktops - Google as well as GitHub uses Boxen, a Puppet-based repository, to manage all their Macs.
     
  4. Puppet to manage large scale infrastructure - Zynga's infrastructure consisted of thousands of servers in public clouds and private datacenters. They used a manual process of kickstarts and post installs to deploy their servers. This was fine initially, but when the deployments increased, this became a tiring job and had a bad effect on scalability. Using Puppet, Zynga manages thousands of machines, increasing the speed of deployment and recovery.
     
  5. Puppet  to replicate production environment - Puppet is best for replicating or cloning the production environment for development and testing purposes. This makes it easier to shift to a DevOps culture and reduces confusion between development and operations. Puppet and Vagrant can do wonders when working together.
     
  6. Puppet for deploying user accounts - Many companies, including ourselves, use Puppet to deploy user accounts across thousands of systems (see the sketch after this list).
     
  7. Puppet for deploying critical updates - Puppet's orchestration capability allows you to query your infrastructure to discover vulnerable systems and then, in a single command, schedule the required updates. At VMware, Puppet is used to simplify security patching, eliminating the need for running routines to detect vulnerabilities and then implementing batch jobs to update affected systems.
     
  8. Puppet for provisioning cloud instances - Working with AWS or VMware? Puppet can automate the provisioning of your instances, be it spinning up new VMs or tearing down unused ones to stay cost-effective.
     
  9. Puppet for managing supercomputers - At NICS, Puppet manages high-performance computing systems like Nautilus (1024 cores, 4 TB shared memory and 16 GPUs) and Keeneland (HP GPU cluster with 1440 nodes, 2.8 TB memory and 360 GPUs). Puppet is used to streamline security and ensure consistency across servers.
     
  10. Puppet for security, compliance and audit - Puppet is a great tool for helping companies enforce compliance with change management processes. It can be used to apply a known secure configuration across the organization. Many companies use Puppet to enforce internal security policies and adhere to international standards such as PCI DSS, SOX, ISO 27001, etc.
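
To make points 2 and 6 concrete, here is a minimal sketch of a managed user account. The account name, uid and key below are placeholders; the point is that every periodic agent run re-asserts this state, so a manually removed account or a changed shell is simply put back:

# Sketch only: a 'deploy' account enforced on every node that includes this code.
user { 'deploy':
  ensure     => present,
  uid        => '1050',          # placeholder uid
  shell      => '/bin/bash',
  managehome => true,
}

ssh_authorized_key { 'deploy@workstation':
  ensure => present,
  user   => 'deploy',
  type   => 'ssh-rsa',
  key    => 'AAAA...placeholder...',   # placeholder public key
}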

With many big companies like Apple, Bank of America, Cisco,  Disney, Google, Harvard University, HBO,  McAfee, Motorola, NASA, Rackspace and Twitter using Puppet, I'm sure you would not go wrong with it.

As my colleague and Puppet expert, Choon Ming, always says, "Puppet has limitless possibilities. All we need is some imagination." 

If you are currently using Puppet, it would be great if you could share with us what you use it for by commenting below.

Puppet Open Source or Puppet Enterprise


I'm sure most of the people who are using Puppet (or are about to) have thought about this at least once. When I interact with clients and prospects, I am always asked whether to go for Puppet Enterprise or Puppet Open Source. In this blog post, I'll compare the two editions so that you can make a more informed decision.

Some of the important factors to consider when comparing the two editions are:

Features

Depending on your needs, Puppet Enterprise is a winner here. A few useful features available in Puppet Enterprise which are not available (out of the box) in Puppet Open Source are:

  • Event inspector - to easily visualize infrastructure changes. It's very easy to see which puppet runs caused problems on what servers and why. On the Open Source version of Puppet you'll have to use something like centralised syslog combined with search patterns in order to achieve this. 
  • Automatic discovery - to discover resources and configuration across the servers. Note this is available through MCollective from the command line for the Open Source version, but it's far from ideal.
  • Orchestration - to help you deploy critical updates like patches across hundreds of servers. This is available through MCollective as well for the Open Source version of Puppet, but again it gets complicated quickly.
  • Role-based access control - supports external authentication like Active Directory, LDAP, Google Apps Directory, etc. This is minor functionality for some, major for others. The access control is for users of the Enterprise Console, not for Puppet agents or anything like that.

Bare Metal Provisioning

In this area, Puppet Open Source was a better alternative than Puppet Enterprise. Bare metal provisioning (provisioning of servers from scratch) could be achieved using a combination of Puppet Open Source, Kickstart/PXE/AMI/whichever and the Foreman. Until recently, Puppet Enterprise was not good at bare metal provisioning. However, with the release of PE 3.2, this is slowly changing. The new version of PE has Razor as a tech preview, which does a really good job at provisioning servers from scratch. Razor will also be open source (it already is), but I expect tight integration with the Enterprise Console, which would make the Enterprise Console a more mature alternative to the Foreman.

VMware Cloud Provisioning

Puppet Enterprise supports VMware cloud provisioning out of the box. You can automatically provision and configure VMs using Puppet Enterprise, and it integrates well with vSphere and vCAD.

Cost

This is probably the most important factor for many companies. Puppet is free and open source software whereas Puppet Enterprise has a cost involved. Puppet Enterprise has two different licensing models:

  • Subscription - The annual subscription fee includes the license fee as well as support fees for one year. This is paid yearly.
  • Perpetual - In this mode of licensing, the license fee is paid one time. Support can be chosen as required and is renewed yearly.

GUI / Dashboard

Puppet Open Source does not come with a built-in GUI out of the box. Puppet Enterprise has a beautiful user interface, where you can get a lot done without much coding. However, open source Puppet can be used with add-on dashboards such as Puppetboard and the Foreman. My colleague, Walter Heck, has written a useful blog post about this here.

Puppet Labs Supported Modules

Puppet Enterprise has been tried and tested by Puppet Labs engineers in complex, heterogeneous enterprise environments and is known to work very well. With the latest release, PE 3.2, there is an initial effort to have a number of supported Puppet modules. This is a very welcome addition in an ecosystem that is currently somewhat of a "Wild West".

Support

This is a deciding factor for many companies. Puppet Open Source does not come with support, even though you can get a lot of help from the Puppet community. Puppet Enterprise has two support models: Standard and Premium. Standard support (8x5, US timezone only) is fine for most uses, but for mission-critical applications or organisations in different timezones, I would suggest going for Premium support (24x7). You can read more about Puppet Labs' support plans here.

Puppet has undergone tremendous change in the last couple of years and has become more and more powerful at what it does. Which edition to choose depends on a lot of things, like your business requirements, support needs, budget, etc.

What version of Puppet do you use in your company? It would be great if you could share your views here.

 

How Puppet fits in Complex Enterprise IT Environments


This blog is part 1 of a 2 part series about using Puppet in Complex Enterprise Environments.

Enterprise IT environments are usually complex, heterogeneous and spread across multiple data centers. Server deployment usually takes multiple days unless the proper automation or systems are in place. Configuration drift, IT compliance, agility and visibility are other challenges. To address such challenges, sysadmins often prefer to go with configuration management and automation tools like Puppet, Chef, Ansible, CFEngine, etc. In this blog, I will discuss Puppet.

What is Puppet?

Puppet is next-generation IT automation software for system administrators. It lets system administrators manage the entire infrastructure life cycle. It also allows automation of repetitive tasks, deployment of critical applications and proactive change management.

In a complex enterprise IT environment, an ideal automation solution should:

  • Support multiple OS
  • Provision servers
  • Support virtualization
  • Integrate with monitoring solutions
  • Manage configuration of servers
  • Support compliance initiatives
  • Support cloud infrastructure

Now let's see how Puppet fits these roles.

SUPPORT FOR MULTIPLE OS 

Puppet supports the following operating systems. OSes marked with * support Puppet agents only.

  • Red Hat Enterprise Linux (RHEL) 4*, 5, 6 
  • Windows* Server 2003/2008 R2/2012, and Windows* 7 
  • Ubuntu 10.04 LTS & 12.04 LTS 
  • Debian 6, 7 
  • Solaris* 10, 11 
  • SLES 11 SP1 or greater
  • Scientific Linux 4*, 5, 6 
  • CentOS 4*, 5, 6 
  • Oracle Linux 4*, 5, 6
  • AIX* 5.3, 6.1, 7.1 

Native Support for Microsoft Windows

Windows support is very important for many companies. In a lot of organisations Windows has a share of more than 75%. Puppet has recently significantly improved support for Windows. Puppet Enterprise offers native support for: 

  • Windows Server 2003, Windows Server 2008 R2, and Windows 7. 
  • Graphical installation (.msi package) or command line installation 
  • Puppet resource types: File, User, Group, Scheduled Task, Package (.msi), Service, Exec, Host 
  • Pre-Built Puppet Forge Modules - IIS, SQL Server, Azure, win_facts, windows registry, etc.
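
As a quick, hedged illustration of those resource types (the package name, MSI path and user account are placeholders; wuauserv is the Windows Update service), a manifest for a Windows node might look like this:

# Sketch only: managing an MSI package, a Windows service and a local user.
package { 'Example Agent':
  ensure => installed,
  source => 'C:\installers\example-agent.msi',  # placeholder path to an MSI
}

service { 'wuauserv':
  ensure => running,
  enable => true,
}

user { 'deploy':
  ensure => present,
  groups => ['Administrators'],  # hypothetical local admin account
}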

PROVISIONING NEW SERVERS 

Without Puppet, you provision new servers up to a basic level only, using Cobbler, Kickstart, Razor or any other provisioning tool of your choice. After that, you might go in manually to configure and set up everything else. Maybe you have scripts for it, but they are not super flexible.

With Puppet, you integrate the setup of the Puppet agent into your provisioning process. Then the Puppet agent runs and configures the whole server by itself. Just wait 10 minutes and the bare OS installation will have turned into a fully usable, production-ready machine.

Puppet & Kickstart 

When you create the OS image that goes onto the machine with Kickstart, you make sure it contains the Puppet agent, already installed and configured to run on boot. When the machine boots for the first time, it connects to the Puppet master and you can use Puppet to bring it to the desired configuration. In short, Kickstart installs the minimum needed to get Puppet running; Puppet can then convert the bare OS install into a web server or database server in minutes.
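
As a minimal sketch (node and class names are illustrative, and the httpd package assumes a RHEL-style system), the classification that turns that freshly kickstarted box into a web server can be as small as:

node 'web01.example.com' {
  include role::webserver
}

class role::webserver {
  package { 'httpd':
    ensure => installed,
  }

  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],
  }
}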

Puppet & SCCM

System Center Configuration Manager (SCCM) brings to the table a Windows-native tool that is well-integrated with its target software and OS, capable of managing configuration from the provisioning step on up. Puppet is limited to the configuration layer only and does not descend as low as provisioning, and it doesn't come with a Windows-native GUI for setting up policies. What Puppet does differently than SCCM is offer true infrastructure-as-code configuration management. In terms of technical ability, Puppet core types and providers give a solid spread of out-of-the-box functionality that can be built on per typical Puppet practice to fashion larger abstractions either within the Puppet language or in Ruby. Puppet is explicitly designed to be a highly extensible framework, therefore additional resource types are easy to write, distribute, or find on the Puppet Forge. All of this, combined with the significantly lower per-node price, make Puppet Enterprise a compelling choice for hybrid Windows/Unix/Linux IT environments, an agile alternative to SCCM, and a tool complementary to Group Policy.

Puppet & Razor

Razor is an advanced provisioning application that can deploy both bare-metal and virtual systems. Razor makes it easy to provision a node with no previously installed operating system and bring it under the management of Puppet Enterprise. Razor's policy-based bare-metal provisioning enables you to inventory and manage the lifecycle of your physical machines. With Razor, you can automatically discover bare-metal hardware, dynamically configure operating systems and/or hypervisors and hand nodes off to Puppet Enterprise for workload configuration. Razor policies use discovered characteristics of the underlying hardware and user-provided data to make provisioning decisions.

The following steps show a high-level view of provisioning a node with Razor:

  1. Razor identifies a new node - When a new node appears, Razor discovers its characteristics by booting it into the Razor microkernel and inventorying its facts.
  2. The node is tagged - The node is tagged based on its characteristics. Tags contain a match condition — a Boolean expression that has access to the node’s facts and determines whether the tag should be applied to the node or not.
  3. The node tags match a Razor policy - Node tags are compared to tags in the policy table. The first policy with tags that match the node’s tags is applied to the node.
  4. Policies pull together all the provisioning elements
  5. The node is provisioned with the designated OS and managed with PE

VIRTUALIZATION

System administrators face numerous challenges in today's virtualized world. VM sprawl, configuration drift, and the increasingly heterogeneous nature of IT environments - public, private and hybrid cloud platforms, multiple operating systems, new application stacks - make managing infrastructure even more complex. In addition, organizations' expectations for rapid response times and fast delivery of applications only seem to increase. Using Puppet's declarative, model-based approach to IT automation, system administrators can take full advantage of the responsiveness of their VMware deployments without any loss in productivity. Furthermore, Puppet's abstraction layer enables sysadmins to reuse their configurations across physical, virtual and cloud environments, as well as operating systems, databases and application servers. Sysadmins can benefit from using Puppet Labs and VMware integrations for configuring VMs and provisioning private cloud applications.

Puppet & vSphere / ESXi

The sheer volume and dynamic nature of nodes make managing the lifecycle of VMware virtual machines a challenge. In particular, keeping configurations consistent across dev, test and prod environments while rapidly provisioning, configuring, updating and terminating VMs requires automation in order to scale without impacting quality of service. Puppet Enterprise can help.

With its integration with VMware vCenter, Puppet Enterprise enables sysadmins to provision VMware VMs and automatically install, configure, and deploy applications like web servers and databases. Furthermore, these declarative, model-based configurations are reusable across operating systems, dev-test-prod environments, and even physical and public cloud infrastructures. Using Puppet Enterprise, IT teams can automate away the menial, repetitive tasks around lifecycle management of their VM infrastructure, allowing them to scale services and applications quickly, reliably, and efficiently.  

ENTERPRISE MONITORING

Leveraging the functionality of PuppetDB in Puppet Enterprise, you get out-of-the-box contextual dashboards and can centrally monitor advanced features such as inventory services and exported resources. This large inventory of metadata for each node can help sysadmins optimize their deployments and report on expected or unexpected behaviour.

Puppet's integration with products like ScienceLogic empowers IT administrators to standardize, automate change and manage policies, while simultaneously ensuring the performance and availability of their systems and Puppet Enterprise deployments. This combined solution enables Puppet Enterprise customers to discover, configure, manage and monitor their dynamic infrastructure, especially in larger distributed environments. The integration includes support for automated discovery of all Puppet Enterprise resources. Aligning these resources in device categories and groups enables you to apply different KPIs and events to different classes of service, for example identifying the top 10 most resource-consuming Puppet nodes per environment. Monitoring can also be done using external tools like Nagios, which works quite well with Puppet.
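
As a small, hedged example of the exported resources mentioned above (host and check names are illustrative), every node can publish its own Nagios check to PuppetDB, and the monitoring server then collects all of them:

# On every monitored node: export a service check describing this host.
@@nagios_service { "check_ssh_${::fqdn}":
  check_command       => 'check_ssh',
  host_name           => $::fqdn,
  service_description => 'SSH',
  use                 => 'generic-service',
}

# On the Nagios server: collect every exported check from PuppetDB.
Nagios_service <<| |>>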

In the next blog post in the series we shall discuss the following:

  • Puppet for configuration management
  • Puppet for compliance
  • Puppet for automation of cloud infrastructure

Clojure: an outsider's investigation


Last week, this post on the Puppet Labs blog caught my eye. It announces a services framework called TrapperKeeper, which seems interesting. To be honest, I haven't looked into what it does or how it does it.

I did, however, spend a bit of time investigating Clojure as well as the community response to this announcement. I'll share my thoughts here. I do have to warn that this is all found through creative surfing, so welcome to how my mind works when investigating a (to me) new piece of open source technology.

Clojure

I started by looking at Clojure. Not so much at what the language can do and its syntax and all that, since a) my programming days are (sadly) mostly over and b) there are far smarter minds that can say sensible things about that.

I am, however, increasingly interested in the continuity of technologies, as this is an important factor in enterprise adoption. This in turn helps me decide whether we should look into offering training for those technologies. So, I dug into the information that is publicly and easily accessible:

  • The GitHub contributor stats page: as of this moment, the vast majority of the commits (1600+, versus 200+ by Stuart Halloway, the runner-up) are done by Rich Hickey, the original author. In the past three months, however, Alex Miller has the lead (stats here), indicating a possible shift of attention for Rich. Of course this is pure speculation and I don't claim to have any inside knowledge here; remember, this is just an outsider's perspective.
  • Let's dig into which companies are behind those top contributors. This gives me a decent feeling: there are not a lot of commits going into the Clojure core repository, but control doesn't seem to rest with a single company. That said, the main contributor is a professional services company, so people can turn somewhere if they want support.
    • 4 out of the top 6 are from Cognitect, a company fully focussed on Clojure and Datomic. Doing a bit of reading, they seem to be the "good kind" of open source company. Minor downside: the company is quite young.
    • The other two top contributors are Toby Crawley, who works for Red Hat, and Andy Fingerhut, who according to a quick LinkedIn search works for Cisco. Good: two major enterprises that at the very least have people working on this. Toby's site clearly states he works on this professionally; for Andy this is less clear.
  • Going through profiles of the main contributors, I found an interesting blog post by Alex Miller. This blog summarizes the 2013 State of Clojure Survey. Inside it, we find some interesting nuggets:
    • Tooling is the biggest category of complaints. This is interesting, because it directly conflicts with what the Puppet Labs blog post lists as a good reason to go for the JVM. It seems like the culprit is "the relationship between Clojure code and bytecode is complex and not necessarily 1-to-1 – getting good s-expression level support is challenging". I don't pretend to know better than either party, but I am cautioned by this. Anyone who can shed some light on this is welcome to leave a comment.
    • The other big category of problems is documentation. That is the same thing we can read in the HackerNews discussion. Having spent a decent bit of time with half-assed documentation in the MySQL HA scene, I am not super thrilled when reading this.
  • Comments on the Puppet Labs blog post itself:
    • 'Engineer' said: "It's sad that the level of technical ability is so low that people adopt languages like clojure because they think its "concurrent". The JVM is not concurrent, therefore clojure is not concurrent therefore you're just signing up for a world of hurt. The Erlang VM has been around longer than the JVM, it really is concurrent, and it really is battle tested."
      Sadly, I have no idea if this is true, and without much further investigation it's hard to verify. I just see it as a possible red flag that I'd want to dig into later on.
    • 'Jeff Dickey' said: "First clue: the "we're so awesome we have to build our own infrastructure even when we're probably complete n00bs in our new hipster language" syndrome. Most of the dings that were laid out as "justification" were *operational nice-to-haves*; if your new environment isn't mature enough to have rock-solid operational support (and anything on the JVM really *should*), then you are fundamentally misunderstanding something."
      This doesn't pertain to Clojure so much, but to the fact that Puppet Labs created TrapperKeeper. While the language used here is not my favorite, I am concerned about the underlying point: this framework is obviously not Puppet Labs' core business. While important for their products, I wonder whether it's a great long-term plan to build this stuff in-house (and thus spend resources on it). I guess time will tell, mostly through whether the project gets outside contributors and contributions. Irrelevant but ironic: we're discussing this issue in this specific case, given that Puppet was created to counter a very similar problem of everyone creating their own tooling in-house :)

That's all for now. It's too early to tell what this all means in the grander scheme of the Puppet ecosystem and where it will lead in the next few years. Personally I'm not happy with the JVM from an operational perspective, as its startup time and memory usage are a bit of a turnoff. That said, PuppetDB has been a major step forward over stored configurations since Puppet 3.x, so I'm just going to sit back and digest all my newly found knowledge while waiting to see where this is all going.

Managing Percona XtraDB Cluster with Puppet


Last month I spoke at the Percona Live conference about MySQL and Puppet. There was a lot of interest in the talk, so I figured I'd write a blog post about it as well. I used the Galera module I wrote as an example in the session, so this post will be specifically about Galera.

Prerequisites

Setting up VirtualBox

We have used specific network settings for VirtualBox in our Vagrantfile, so we'll need to make sure it's configured properly. Inside VirtualBox, go to Preferences -> Network -> Host-only Networks (on a Mac; it may be different on other host OSes). Edit vboxnet0, or add it if the list is empty. Use the following settings to make sure your VMs will be using the IPs defined in the Vagrantfile:

Adapter tab:

  • IPv4 Address: 192.168.56.1
  • IPv4 Network Mask: 255.255.255.0
  • IPv6 Settings can stay on their defaults

DHCP tab:

  • tick "Enable server"
  • Server Address: 192.168.56.100
  • Server Mask: 255.255.255.0
  • Lower Address Bound: 192.168.56.101
  • Upper Address Bound: 192.168.56.254

DHCP is not strictly needed (we set static IPs in the Vagrantfile), but if you add other servers to your testing later on, it's convenient to have them in the same subnet.

Getting the puppet master up and running

Now that VirtualBox is ready for action, let's grab the code and fire up the Puppet master with Vagrant:

$ git clone https://github.com/olindata/olindata-galera-puppet-demo.git olindata-galera-demo
$ cd olindata-galera-demo/vagrant
$ vagrant up master
[..Wait a few minutes, grab coffee and read the rest of this post..]

Note that the vagrant up command throws errors here and there, but they are okay as they are corrected later by the master_setup.sh script. To check that everything completed, log into the master and check which process is listening on port 8140: this should be httpd. In addition, a puppet agent -t run should complete without problems:

$ vagrant ssh master
[vagrant@master ~]$ sudo su -
[root@master ~]# netstat -plant | grep 8140
tcp        0      0 :::8140                     :::*                        LISTEN      5305/httpd
[root@master ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/concat_basedir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/etckepper_puppet.rb
Info: Caching catalog for master.olindata.vm
Info: Applying configuration version '1397908319'
Notice: Finished catalog run in 5.30 seconds

If the output of the commands is as shown above, the Puppet master is now ready for the agents to be brought up.

Bringing up the galera nodes

First node

The Galera Puppet module is quite nice, but it has one big caveat at the moment: bootstrapping a cluster. The problem is that when Puppet runs on a node, it has a hard time figuring out whether that node is the first node in a cluster (and thus needs to be bootstrapped) or whether it is joining an existing cluster. A solution would be to write a little script that checks all the nodes in the wsrep_cluster_address variable to see if they are already up, but that is neither very nice (we're trying to prevent needing that in Puppet) nor implemented at present.

Since most of the time we'll be adding nodes to an already existing cluster, we have opted for that to be the default in the Galera module. This in turn means that for this demo we need to bring up one VM first, bootstrap Galera on it and then bring up the other nodes; a sketch of how that choice could be expressed in a profile follows below. (Note: elegant solutions to this problem are welcome in the comments!)
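
For illustration only: the profile::galera class below, its parameters and the node names are hypothetical and not the actual interface of the module used in this demo. It shows one way to keep "join an existing cluster" as the default while still allowing a single node to be flagged for bootstrapping, for example via Hiera:

class profile::galera (
  $cluster_nodes = ['galera000', 'galera001', 'galera002', 'galera003'],
  $bootstrap     = false,
) {
  if $bootstrap {
    # an empty gcomm:// address tells the first node to form a new cluster
    $cluster_address = 'gcomm://'
  } else {
    # every other node joins the existing members
    $node_list       = join($cluster_nodes, ',')   # join() comes from puppetlabs-stdlib
    $cluster_address = "gcomm://${node_list}"
  }

  # $cluster_address would then be passed on to the galera/mysql classes
  # as the wsrep_cluster_address setting.
  notify { "wsrep_cluster_address for this node: ${cluster_address}": }
}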

Let's start by bringing up the vm and ssh'ing in as root:

$ vagrant up galera000
Bringing machine 'galera000' up with 'virtualbox' provider...
==> galera000: Importing base box 'debian-73-x64-virtualbox-puppet'...
==> galera000: Matching MAC address for NAT networking...
==> galera000: Setting the name of the VM: vagrant_galera000_1397908792529_17411
==> galera000: Fixed port collision for 22 => 2222. Now on port 2200.
==> galera000: Clearing any previously set network interfaces...
==> galera000: Preparing network interfaces based on configuration...
    galera000: Adapter 1: nat
    galera000: Adapter 2: hostonly
==> galera000: Forwarding ports...
    galera000: 22 => 2200 (adapter 1)
==> galera000: Booting VM...
==> galera000: Waiting for machine to boot. This may take a few minutes...
    galera000: SSH address: 127.0.0.1:2200
    galera000: SSH username: vagrant
    galera000: SSH auth method: private key
    galera000: Error: Connection timeout. Retrying...
==> galera000: Machine booted and ready!
==> galera000: Checking for guest additions in VM...
==> galera000: Setting hostname...
==> galera000: Configuring and enabling network interfaces...
==> galera000: Mounting shared folders...
    galera000: /vagrant => /Users/walterheck/Source/olindata-galera-demo/vagrant
==> galera000: Running provisioner: shell...
    galera000: Running: /var/folders/4x/366j5zl15b1b4z7t6l7jf6zw0000gn/T/vagrant-shell20140419-2728-1wfc0hm
stdin: is not a tty
$ vagrant ssh galera000
Linux vagrant 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64
Last login: Wed Feb  5 12:49:09 2014 from 10.0.2.2
vagrant@galera000:~$ sudo su -
root@galera000:~#

Next up, we run the Puppet agent on it. Note that since we have autosigning turned on on the puppet master, the first run doesn't need to wait for a signed certificate. The Puppet run will have some errors, but we can live with that:

root@galera000:~# puppet agent -t

In the output (too much to display here), you'll see red lines that complain about not being able to start mysql:

Error: Could not start Service[mysqld]: Execution of '/etc/init.d/mysql start' returned 1:
Error: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: change from stopped to running failed: Could not start Service[mysqld]: Execution of '/etc/init.d/mysql start' returned 1:
Error: Could not start Service[mysqld]: Execution of '/etc/init.d/mysql start' returned 1:
Error: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: change from stopped to running failed: Could not start Service[mysqld]: Execution of '/etc/init.d/mysql start' returned 1:

This is not actually true: when you check for the mysql process after the Puppet run, it's there:

root@galera000:~# ps aux | grep mysql
root      9881  0.0  0.0   4176   440 ?        S    05:05   0:00 /bin/sh /usr/bin/mysqld_safe
mysql    10209  0.2 64.8 830292 330188 ?       Sl   05:05   0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/lib/mysql/galera000.err --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306 --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1
root     12456  0.0  0.1   7828   872 pts/0    S+   05:11   0:00 grep mysql

Let's kill MySQL first:

root@galera000:~# pkill -9ef mysql
root@galera000:~# ps aux | grep mysql
root     12475  0.0  0.1   7828   876 pts/0    S+   05:12   0:00 grep mysql

Next up, we bootstrap the cluster:

root@galera000:~# service mysql bootstrap-pxc
[....] Bootstrapping Percona XtraDB Cluster database server: mysqld[....] Please take a l[FAILt the syslog. ... failed!
 failed!

Somehow this thinks it failed, but it didn't. To make sure it worked, log into MySQL and check the wsrep_cluster_* status variables. It should look something like this:

mysql> show global status like 'wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 1                                    |
| wsrep_cluster_size       | 1                                    |
| wsrep_cluster_state_uuid | 7665992d-bc38-11e3-a2c4-9aefb5dea18a |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
4 rows in set (0.00 sec)

Now that MySQL is properly bootstrapped, we can run the Puppet agent one more time and see it complete properly now:

root@galera000:~# puppet agent -t
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/etckepper_puppet.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/concat_basedir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Caching catalog for galera000.olindata.vm
Info: Applying configuration version '1398437502'
Notice: /Stage[main]/Xinetd/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Xinetd/Service[xinetd]: Unscheduling refresh on Service[xinetd]
Notice: /Stage[main]/Mcollective::Server::Config::Factsource::Yaml/File[/etc/mcollective/facts.yaml]/content:
--- /etc/mcollective/facts.yaml 2014-04-25 08:17:39.000000000 -0700
+++ /tmp/puppet-file20140425-17657-1jhcy3j  2014-04-25 08:23:25.000000000 -0700
@@ -63,7 +63,7 @@
   operatingsystemmajrelease: "7"
   operatingsystemrelease: "7.3"
   osfamily: Debian
-  path: "/usr/bin:/bin:/usr/sbin:/sbin"
+  path: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
   physicalprocessorcount: "1"
   processor0: "Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz"
   processorcount: "1"

Info: /Stage[main]/Mcollective::Server::Config::Factsource::Yaml/File[/etc/mcollective/facts.yaml]: Filebucketed /etc/mcollective/facts.yaml to puppet with sum 3a6aabbe41f4023031295a8ac3735df3
Notice: /Stage[main]/Mcollective::Server::Config::Factsource::Yaml/File[/etc/mcollective/facts.yaml]/content: content changed '{md5}3a6aabbe41f4023031295a8ac3735df3' to '{md5}227af082af9547f423040a45afec7800'
Notice: /Stage[main]/Mysql::Server::Root_password/Mysql_user[root@localhost]/password_hash: defined 'password_hash' as '*55070223BD04C680F8BD1586E6D12989358B4B55'
Notice: /Stage[main]/Mysql::Server::Root_password/File[/root/.my.cnf]/ensure: defined content as '{md5}af3f5d93645d29f88fd907e78d53806b'
Notice: /Stage[main]/Galera::Health_check/Mysql_user[mysqlchk_user@127.0.0.1]/ensure: created
Notice: /Stage[main]/Galera/Mysql_user[sst_xtrabackup@%]/ensure: created
Notice: /Stage[main]/Galera/Mysql_grant[sst_xtrabackup@%/*.*]/privileges: privileges changed ['USAGE'] to 'CREATE TABLESPACE LOCK TABLES RELOAD REPLICATION CLIENT SUPER'
Notice: Finished catalog run in 4.65 seconds

Now that this is done, we're ready to move on to the other nodes.

Subsequent nodes

Next, we bring up the other three vagrant nodes. The output from vagrant up will look like this:

$ vagrant up galera001
Bringing machine 'galera001' up with 'virtualbox' provider...
==> galera001: Importing base box 'debian-73-x64-virtualbox-puppet'...
==> galera001: Matching MAC address for NAT networking...
==> galera001: Setting the name of the VM: vagrant_galera001_1398437027038_8689
==> galera001: Fixed port collision for 22 => 2222. Now on port 2201.
==> galera001: Clearing any previously set network interfaces...
==> galera001: Preparing network interfaces based on configuration...
    galera001: Adapter 1: nat
    galera001: Adapter 2: hostonly
==> galera001: Forwarding ports...
    galera001: 22 => 2201 (adapter 1)
==> galera001: Booting VM...
==> galera001: Waiting for machine to boot. This may take a few minutes...
    galera001: SSH address: 127.0.0.1:2201
    galera001: SSH username: vagrant
    galera001: SSH auth method: private key
    galera001: Error: Connection timeout. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
    galera001: Error: Remote connection disconnect. Retrying...
==> galera001: Machine booted and ready!
==> galera001: Checking for guest additions in VM...
==> galera001: Setting hostname...
==> galera001: Configuring and enabling network interfaces...
==> galera001: Mounting shared folders...
    galera001: /vagrant => /Users/walterheck/Source/olindata-galera-demo/vagrant
==> galera001: Running provisioner: shell...
    galera001: Running: /var/folders/4x/366j5zl15b1b4z7t6l7jf6zw0000gn/T/vagrant-shell20140425-19022-fiys2z
stdin: is not a tty

Do the same for galera002 and galera003, then log into galera001 and run puppet agent -t:

$ vagrant ssh galera001
Linux vagrant 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb  5 12:49:09 2014 from 10.0.2.2
vagrant@galera001:~$ sudo su -
root@galera001:~# puppet agent -t
Info: Creating a new SSL key for galera001.olindata.vm
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for galera001.olindata.vm
Info: Certificate Request fingerprint (SHA256): A2:FF:3B:6F:7C:BA:FF:5B:65:C7:36:6F:CF:D2:FD:10:50:7C:63:7E:26:F1:F5:06:54:B8:C5:E7:2D:E2:17:37
Info: Caching certificate for galera001.olindata.vm
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for ca
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/puppet]/ensure: created
Notice: /File[/var/lib/puppet/lib/puppet/provider]/ensure: created
Notice: /File[/var/lib/puppet/lib/puppet/provider/database_user]/ensure: created
[..snip..]
Notice: /Stage[main]/Profile::Mysql::Base/Package[xtrabackup]/ensure: ensure changed 'purged' to 'latest'
Info: Class[Mcollective::Server::Config]: Scheduling refresh of Class[Mcollective::Server::Service]
Info: Class[Mcollective::Server::Service]: Scheduling refresh of Service[mcollective]
Notice: /Stage[main]/Mcollective::Server::Service/Service[mcollective]: Triggered 'refresh' from 1 events
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 137.24 seconds

When the puppet agent run is finished, we do a similar round of pkill and service start:

root@galera001:~# pkill -9ef mysql
root@galera001:~# ps aux | grep mysql
root     12077  0.0  0.1   7828   876 pts/0    S+   08:28   0:00 grep mysql
root@galera001:~# service mysql start
[FAIL] Starting MySQL (Percona XtraDB Cluster) database server: mysqld[....] Please take a look at the syslog. ... failed!
 failed!

If you then look at the mysql error log, it will output something like this after a few seconds, indicating the node has joined our cluster:

root@galera001:~# tail /var/log/mysql/error.log
2014-04-25 08:29:00 12900 [Note] WSREP: inited wsrep sidno 1
2014-04-25 08:29:00 12900 [Note] WSREP: SST received: 7019fb90-cc8d-11e3-9540-1248cb76bdcb:6
2014-04-25 08:29:00 12900 [Note] WSREP: 0.0 (galera001): State transfer from 1.0 (galera000) complete.
2014-04-25 08:29:00 12900 [Note] WSREP: Shifting JOINER -> JOINED (TO: 6)
2014-04-25 08:29:00 12900 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.15-63.0'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release 25.5, wsrep_25.5.r4061
2014-04-25 08:29:00 12900 [Note] WSREP: Member 0 (galera001) synced with group.
2014-04-25 08:29:00 12900 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 6)
2014-04-25 08:29:00 12900 [Note] WSREP: Synchronized with group, ready for connections
2014-04-25 08:29:00 12900 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

Next up is a little hack. There's a Galera-specific dependency error in the mysql module, where it tries to create the root user with a password before it writes that info to the ~/.my.cnf file (which is used by the module's commands to avoid needing a hard-coded root password). Since fixing the module is outside the scope of this article, we'll cheat a little bit. Create a /root/.my.cnf file like this:

root@galera001:~# cat .my.cnf
[client]
user=root
host=localhost
password='khbrf9339'
socket=/var/lib/mysql/mysql.sock

After that, the puppet agent run will complete successfully:

root@galera001:~# puppet agent -t
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/etckepper_puppet.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/concat_basedir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Caching catalog for galera001.olindata.vm
Info: Applying configuration version '1398437502'
Notice: /Stage[main]/Xinetd/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Xinetd/Service[xinetd]: Unscheduling refresh on Service[xinetd]
Notice: Finished catalog run in 3.04 seconds

The last step is to restart the xinetd service one more time:

root@galera001:~# service xinetd restart
[ ok ] Stopping internet superserver: xinetd.
[ ok ] Starting internet superserver: xinetd.

Now, rinse and repeat the steps for galera001 on galera002 and galera003:

  • write the .my.cnf file
  • run puppet agent -t
  • pkill mysql, then start the service manually
  • run puppet agent -t again
  • service xinetd restart

After all this is done, run puppet agent -t on all nodes one more time, specifically on galera000, as this node runs an HAProxy instance that will help us load balance the connections. The HAProxy configuration automatically picks up Galera nodes as they come up, and a Puppet agent run takes care of this.

HAProxy

This demo cluster comes with an HAProxy instance running on galera000. Its HTTP status page should be accessible from the host directly, giving you insight into the status of all nodes. If you did all of the above successfully, the result should look like this:

Open a browser on your host and go to: [http://192.168.56.100/haproxy?stats]

haproxy stats

We have created two listeners by default, with slightly different behaviour:

  1. One listener (galera_reader, port 13306) divides incoming queries round-robin over its backends. This is the one to send all SELECT queries to.
  2. The second listener (galera_writer, port 13307) always directs sessions at the same server, unless that one is unavailable. This is the one to send all write traffic to.

This setup assumes your application can split reads and writes in such a way. This is common in applications that previously ran on classic asynchronous replication. If your app can't do this, start by sending all traffic to galera_writer, then slowly implement functionality that makes SELECTs go to galera_reader.

Note that Galera is synchronous replication, so in theory you can send your writes to any node. In practice, however, this is not so simple when concurrency goes up; that discussion is for another blog post.

Summary

You are now ready to send queries to the two ports on the HAProxy node and watch them be distributed over the Galera cluster. Feel free to play around by shutting down certain nodes, then watch them come back up.

In a next article I'll discuss the Puppet repository structure that is used for this demo.

Hiera has some problems that need fixing


When Puppet 3 was released, one of the biggest changes in Puppet's history became a first-class citizen: Hiera. Initially written by R.I. Pienaar, Hiera provides a great way to separate configuration data from configuration code and logic.

Since then, it has been widely adopted. For clients we've implemented it properly for, in combination with roles and profiles, the cleanup of code is remarkable. However, Hiera has its own fair share of problems, some of which only become clear later on. I'll list a few of them in this post.

Too many levels of hierarchy

When you create too many levels of hierarchy, it becomes unclear which configuration values live where. Generally, more than, let's say, 5 levels of hierarchy is not really recommended.

Different backends get read sequentially

Hiera supports multiple backends, even simultaneously. This is in itself a great feature, as it allows you to read configuration data from different sources. However, Hiera first scans all hierarchy levels in one backend, and only if nothing is found will it continue searching in the next backend. This is not always the behavior you would expect or wish for. It can also be used to your advantage, though, as you can use a specific backend as a sort of override for your common backend. Be careful with that kind of approach, though; it quickly becomes hard to manage.

Lack of an API / daemon

Since Hiera is just a script that gets called, there is no daemon. Since there is no daemon, there is also no API for external tools to call. They can either parse the Hiera configuration files and figure things out by themselves, or fire up Hiera for each call.

Merge_behaviour

When you wish to grab data from different hierarchy levels, things get complicated. Especially when you want to combine hashes of values, you have to choose: do you want a key that exists on one hierarchy level to be merged with the ones with the same name living on other levels? Or do you want the highest-priority level to simply overwrite the others? You can control this behavior with the merge_behavior setting in the Hiera config file, but it's a global setting, which means you can't change it for specific lookups. Add to that that this is also true for code in modules, and things get a tad harder again: should a module rely on a specific merge_behavior setting or not?
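
A small sketch of the difference, with the data described in comments (class and key names are illustrative):

# Suppose the 'firewall::rules' hash is defined on two hierarchy levels:
#   - common.yaml defines a rule named 'ssh'
#   - nodes/web01.example.yaml defines a rule named 'http'
#
# A plain priority lookup only ever returns the most specific level:
$rules_first = hiera('firewall::rules')      # => just the 'http' rule
#
# hiera_hash() merges the hashes found on all levels, but *how* nested keys
# are combined ('native' vs 'deeper') is the single global merge_behavior
# setting in hiera.yaml; in Puppet 3 it cannot be chosen per lookup.
$rules_all = hiera_hash('firewall::rules')   # => both the 'ssh' and 'http' rules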

Management of many keys across many levels

In really large environments, thousands of keys across many different hierarchy levels are not uncommon. Unless you set up a standard for how keys are managed within each file and which keys go on which levels, things quickly get messy. Questions to ask (one possible key layout is sketched after the list):

  • do you order keys within a single yaml file alphabetically, or group them by functionality and order those groups alphabetically (i.e. do yum::repo::mysql and mysql::server::root_password go together?)
  • is anyone allowed to add a hierarchy level when they see fit?
  • do you use a hierarchy level for osfamily and one for operatingsystem?
  • what to do with values that are the same across all of your MySQL servers, but for which you don't have a hierarchy level? Do you specify those values on a global level or on a host-specific level?
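
One possible convention (a sketch, not taken from the original post) is to name every key after the class parameter it feeds and keep keys sorted alphabetically within each file:

# common.yaml -- module-prefixed keys, sorted alphabetically
mysql::server::root_password: 'change-me'
ntp::servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
yum::repo::mysql: 'https://repo.mysql.com/yum/'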

These are just some of the problems you can encounter with hiera. Most of them can be mitigated by careful decisions and proper standards. And let me be clear: using hiera is still far preferable to not using it! However, as puppet matures, maybe it is time to think about hiera++?

Photo copyright Michael Coghlan


How to judge modules from the puppet forge


On the puppet module forge we find a lot of modules (around 2400 at the time I'm writing this). However, anyone who has looked around the puppet forge has figured out that a lot of these modules are of questionable quality, and that's putting it nicely.

In the puppet fundamentals training we give, there's a chapter about the puppet forge. One of the questions I always get is "how can I distinguish the good modules from the bad ones?". The problem is that there's no single answer to that question. However, there are a number of things we can investigate to see if a module is a good one. Here's a list of indicators. No single one of these is enough to judge by on its own, but taken together they give a good indication of the quality of a module:

Number of downloads

The most obvious metric is the number of downloads a module has had (it's very prominently displayed on the forge search result page). Unfortunately there's a problem with this: it doesn't say anything about the quality of the module. In addition, since many people use this as their main metric ("if others use it, it must be good quality"), the modules with the most downloads simply stay at the top because of their download count, not because their quality is so good.

GitHub 

Next, we turn our eye to the github repository that is behind the forge module. Almost all modules are hosted on GitHub these days, so just click through the module into the github repository. To get there, go to a module's page and click "Project URL" near the top of the page.

Github stats

At the top of the github repository page, we see a number of statistics that indicate how mature the module is. The most important ones: 

  • the number of commits: the higher this is, the more work has been done on the module
  • the number of contributors. A good module has more than one contributor; a great module has 20 or more (the puppet labs MySQL module has over 100 as of this moment)
  • the number of open issues and pull requests. If both of these happen to be 0, click on them to check further. A perfect module will have 0 issues and 0 pull requests open, but it's more likely that a good and actively developed module has a low number of issues and pull requests open. Check out the issues and pull requests: how long have they been there? Is there any activity on them?
  • the last commit. When was the last commit? A good module is likely under continuous development, so the last commit shouldn't be more than a few days ago. Anything over 6 months indicates no active development.

Github readme

Next we look at the readme and any other documentation. Is it complete with examples of usage? Does it tell us what the module can do, which classes, defines and functions it contains and how to use them? If so, the author intended for others to use the module, which is usually a good sign.

Rspec code

A great module comes with a full suite of tests for rspec and maybe even beaker. If there's a spec directory in the module, browse through it. Does it contain a good suite of tests? If so, the author(s) spent time on testing, another good indicator.

Code quality

Next up, browse through some of the manifests in the module. Do you see cleanly outlined code? Are there any coding no-no's, like the use of defined() or unnecessary execs? Are class parameters validated at the top of each class?
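
As a rough example (a sketch using stdlib's validate functions; the class and its parameters are invented), 'parameters validated at the top of each class' looks something like this:

# Validate input before any resources are declared.
class myapp (
  $ensure     = 'present',
  $port       = '8080',
  $admin_user = 'admin',
) {
  validate_re($ensure, '^(present|absent)$',
    "myapp::ensure must be 'present' or 'absent', got '${ensure}'")
  validate_re("${port}", '^\d+$', 'myapp::port must be numeric')
  validate_string($admin_user)

  # ... resources follow once the input is known to be sane
}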

Author

Over time, you'll realize that some authors can be implicitly trusted to write at the very least decent modules. Check modules from those authors first, before researching modules from authors you don't know.

Puppet enterprise supported

A relatively new development is Puppet Enterprise supported modules. These modules can be used on both open source and enterprise puppet, but they are officially supported by Puppet Labs if you use them on puppet enterprise.

All of the above 'symptoms' together give you a good indication of which modules are good and which ones are not. If you find something wrong with a module, or you are missing a piece of functionality, your best option is always to fork the module, add the functionality you're missing or are unhappy about, and submit a pull request. If everyone adds bits and pieces over time, modules can become very feature-complete, benefiting all of the people that use them.

Tackling Windows with Puppet!!


And I'm back!! It has been a long 3 months for me. For those who are not aware, I've been in the UAE working on-site for a client project. It was a very challenging project for a *nix systems administrator like me, because the goal was to automate a large number of Windows applications for the client. Windows! The first thing that came to mind was the horror of using Windows, but at the same time I was rather excited, because it is uncharted territory for both me and OlinData, as we've never worked with Puppet on Windows before. Over the course of the 3 months I've gotten accustomed to developing Windows Puppet modules, and I'll share some tips on what one should know when writing a Windows Puppet module. Don't get me wrong, Puppetlabs does have some great articles about Puppet and Windows here and here, but all of the below came from personal experience and is not documented (or I might've just missed it).

Powershell is your best friend!

In Linux we work in the terminal most of the time, 95% of it in fact, but in Windows everything is point-and-click in the GUI. Puppet works best on the command line, and the only GUI you'll ever see with Puppet is the web console itself. Knowing how to use Powershell will take you a long way with Puppet and Windows, especially when you have to execute tasks on Windows. For example, instead of using Server Manager to add or remove roles via the GUI, we use Powershell cmdlets:

  exec { 'Install SMTP server':
    command  => '(Import-Module ServerManager; Add-WindowsFeature SMTP-Server)',
    path     => $::path,
    onlyif   => 'Import-Module ServerManager; if((Get-WindowsFeature SMTP-Server).installed | select-string -pattern "True") { exit 1}',
    provider => 'powershell',
  }

The code snippet above shows how one can use the Powershell provider module written by Josh Cooper to install an SMTP server role via the ServerManager cmdlet, while keeping it idempotent at the same time. As we progress to more complex requirements we can build a defined resource type that centers around exec and the Powershell provider (a sketch of such a define follows below). Learning Powershell is quite a steep curve, but it's an absolute necessity when building a Windows module.
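
As an example, here is a minimal sketch (not from the original post) of such a defined type, wrapping exec with the powershell provider to install an arbitrary Windows feature idempotently:

  # The feature name is passed as the resource title.
  define windows_feature {
    exec { "add-windows-feature-${title}":
      command  => "Import-Module ServerManager; Add-WindowsFeature ${title}",
      unless   => "Import-Module ServerManager; if ((Get-WindowsFeature ${title}).Installed) { exit 0 } else { exit 1 }",
      provider => 'powershell',
    }
  }

  # Usage: windows_feature { 'SMTP-Server': }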

Make your life easier with built-in tools

Despite what I wrote about Powershell above, some Windows roles can be installed using the built-in DISM (Deployment Image Servicing and Management) tool, which allows one to enable specific roles via the command line without the need for any Powershell cmdlets. The puppet dism module allows one to interact with DISM directly to enable or disable specific roles and features. The only caveat with the DISM module is that you need to explicitly specify the dependencies between roles and features.

class iis {
 
  Dism { ensure => present, }
  dism { 'IIS-WebServerRole': } ->
  dism { 'IIS-WebServer': }
 
}

32-bit redirections

Now we know we can use both Powershell and DISM to work our Puppet magic on Windows servers. There is still one problem I encountered during my 3-month stint with the client in the UAE: Puppet (which runs as a 32-bit application) trying to execute a 64-bit application on a 64-bit machine. Windows, being 'user-friendly', silently redirects the call to the 32-bit version of the application even if you want to run the 64-bit one. This made me want to pull my hair out and cost me a significant amount of debugging time, coming from a *nix background.

The workaround is that Windows provides a Sysnative directory from which we can call the application directly, without the filesystem redirecting us to the other architecture. Without this workaround Puppet will keep spewing errors and naturally you'll start banging your head somewhere.

Hence, instead of running exec with C:\Windows\System32\dism.exe, we run exec with C:\windows\sysnative\dism.exe to prevent the filesystem redirection. Josh Cooper's Powershell module already prevents this issue. If you are using the exec resource type with the core provider, do not forget the Sysnative workaround, as in the sketch below.
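
A minimal sketch (the feature name and the "State : Enabled" check are assumptions, not from the original post) of calling dism.exe through the Sysnative alias from an exec with the default provider:

  exec { 'enable IIS-WebServerRole':
    # Call the 64-bit dism.exe via the Sysnative alias to avoid WOW64 redirection.
    command => 'C:\Windows\sysnative\dism.exe /online /enable-feature /featurename:IIS-WebServerRole',
    # Skip the run when DISM already reports the feature as enabled.
    unless  => 'cmd.exe /c C:\Windows\sysnative\dism.exe /online /get-featureinfo /featurename:IIS-WebServerRole | findstr /c:"State : Enabled"',
    path    => $::path,
  }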

No more clicking

One pointer to remember when building a Windows-based Puppet module: any application or piece of software that is to be installed via Puppet MUST support silent installation. Without silent installation you are pretty much unable to install the application at all. You will need to investigate whether a silent installation is possible using installation parameters, answer files or even environment variables. To date, I've noticed that installers from RIM (Blackberry), VB/ASP-based installers and some custom wrappers do not support any silent installation method. You might need to talk to your vendors.

Although .msi or .exe wrappers can usually do a silent installation, there might be limitations to configuring the application itself, such as the installation path, serial key, etc. This should not deter you from using Puppet to do post-installation configuration of the application, provided it is not too complex. Another pointer: DO NOT put the installation binaries on the Puppet fileserver! Always fetch them from another location, ideally a mounted NAS drive. We want to keep Puppet as lightweight as possible.

  package { 'Microsoft SQL Server 2008 R2 Native Client':
    ensure          => present,
    source          => '\\NAS\Installation Media\sqlncli.msi',
    provider        => 'windows',
    install_options => [ { 'USERNAME' => 'Administrator' }, { 'COMPANYNAME' => 'Company' }, { 'IACCEPTSQLNCLILICENSETERMS' => 'YES' } ],
  }

Vagrant, beaker, etc. are your (other) best friends

Testing, testing, testing, and more testing. That's what you need when developing any Puppet module, but with Windows I can't stress enough how important it is to do all the testing before deployment. Virtual machines with snapshots, using tools such as Vagrant, Virtualbox and VMWare, made it simple for me to test all the Puppet modules without having to keep rebuilding the VM itself. All we have to do is revert to the base snapshot taken before any puppet apply or agent run. Nuff said!

What is thy name?

Windows and Linux are different operating systems, but they share the same notion that each service and package has its own name known to the operating system.

PS C:\> puppet resource service
  service { 'KSecPkg':
    ensure => 'running',
    enable => 'true',
  }
  service { 'KeyIso':
    ensure => 'stopped',
    enable => 'manual',
  }
  service { 'KtmRm':
    ensure => 'stopped',
    enable => 'manual',
  }
  ...
PS C:\> puppet resource package
  package { 'Microsoft .NET Framework 4 Client Profile':
    ensure => '4.0.30319',
  }
  package { 'Microsoft .NET Framework 4 Extended':
    ensure => '4.0.30319',
  }
  package { 'Puppet Enterprise':
    ensure => '3.2.1',
  }
  ...

We can obtain package names from Windows' Programs and Features, while service names can be obtained from the Task Manager (or simply with puppet resource, as shown above). For Windows administrators this is all very familiar, but as *nix administrators we need these resources to get the proper information out of the system. As you can see, it's really the same way of managing them: thanks to the great providers underneath, you declare a service whether it's Linux or Windows and Puppet figures it out. And this is why I love Puppet: everything just works out for you!
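
For example, a hedged sketch that manages one of the services listed above purely by its Windows service name, with the same resource type and syntax you would use on Linux:

  # 'KtmRm' is taken from the `puppet resource service` output above.
  service { 'KtmRm':
    ensure => 'running',
    enable => 'true',
  }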

First impressions of the new cfacter


Having just come back from PuppetConf 2014 in San Francisco last week, I can report that there have been a lot of new developments. Some of the most significant changes since Puppet was started, all the way back 10 years ago, have been announced.

A lot of the announced changes have to do with a move away from Ruby as the language for everything puppet. We already saw this start when PuppetDB was implemented in Clojure. That has been largely successful and without pain, so there's good hope the same can be extended to the rest of the toolset.

The first two tools we can see moving away from Ruby can already be tried today: cfacter and puppetserver. I got down and dirty with cfacter and here are my initial findings:

cfacter

First up I took a look at cfacter. Right now we're in very early release stages, so you have to build it manually, which is always a pain. I tried building on a clean Ubuntu 14.04 machine with puppetserver running and, following the instructions, got through without too many issues. Even with my extremely limited C++ knowledge this went without a glitch.

Execution time

One of the first things I was interested in was execution time, as this is cited as one of the main drivers behind the rewrite. I was not disappointed:

root@cobalt:~# time facter > /dev/null
real    0m3.830s
user    0m0.593s
sys     0m0.913s

root@cobalt:~# time ./cfacter/release/bin/cfacter > /dev/null
real    0m0.358s
user    0m0.220s
sys     0m0.103s

That is a big speed improvement (10.7 times, to be exact)! It might not seem too significant, but it is when you think of an environment running the agent every half hour across hundreds or thousands of machines. That is all time those machines can now spend on other things than just facter. It also means the time it takes to do a puppet agent run will come down, as facter is executed as part of it.

Debug messages

Debugging properly has been an issue for a long time with facter. Particularly running facter in --debug mode would show little of what was going on. No more! Here's some debug output from both to compare:

Here's old-school facter:

value for network_ip6tnl0 is still nil
value for network_ip_vti0 is still nil
value for network_sit0 is still nil
value for network_teql0 is still nil
value for network_tunl0 is still nil
Found no suitable resolves of 2 for swapencrypted
value for swapencrypted is still nil
Found no suitable resolves of 1 for ec2_metadata
value for ec2_metadata is still nil
Found no suitable resolves of 1 for ec2_userdata
value for ec2_userdata is still nil
Found no suitable resolves of 2 for iphostnumber
value for iphostnumber is still nil

That doesn't tell me anything! Unless you happen to be a/the facter developer, this is basically useless.

root@cobalt:~/cfacter/release/bin# ./cfacter --debug
  2014-09-28 22:21:45.603598 INFO  puppetlabs.facter.main - executed with command line: --debug.
  2014-09-28 22:21:45.604366 DEBUG puppetlabs.facter.ruby - searching "/usr/lib" for ruby libraries.
  2014-09-28 22:21:45.604775 DEBUG puppetlabs.facter.ruby - found candidate ruby library /usr/lib/libruby-1.9.1.so.1.9.1.
  2014-09-28 22:21:45.605693 INFO  puppetlabs.facter.ruby - ruby loaded from "/usr/lib/libruby-1.9.1.so.1.9.1".
  2014-09-28 22:21:45.605784 DEBUG puppetlabs.facter.util.posix.dynamic_library - symbol rb_funcallv not found in library /usr/lib/libruby-1.9.1.so.1.9.1, trying alias rb_funcall2.
  2014-09-28 22:21:45.605859 DEBUG puppetlabs.facter.util.posix.dynamic_library - symbol rb_float_new_in_heap not found in library /usr/lib/libruby-1.9.1.so.1.9.1, trying alias rb_float_new.
  2014-09-28 22:21:45.605916 DEBUG puppetlabs.facter.util.posix.dynamic_library - symbol rb_ary_new_capa not found in library /usr/lib/libruby-1.9.1.so.1.9.1, trying alias rb_ary_new2.
  2014-09-28 22:21:45.605986 DEBUG puppetlabs.facter.util.posix.dynamic_library - symbol ruby_setup not found in library /usr/lib/libruby-1.9.1.so.1.9.1.
  2014-09-28 22:21:45.610042 INFO  puppetlabs.facter.ruby - using ruby version 1.9.3 to resolve custom facts.
  2014-09-28 22:21:45.618980 INFO  puppetlabs.facter.main - resolving all facts.
  2014-09-28 22:21:45.619069 DEBUG puppetlabs.facter.facts.collection - fact "cfacterversion" has resolved to "0.2.0".
  2014-09-28 22:21:45.619113 DEBUG puppetlabs.facter.facts.collection - fact "facterversion" has resolved to "0.2.0".
  2014-09-28 22:21:45.619310 DEBUG puppetlabs.facter.facts.collection - skipping external facts for "/etc/facter/facts.d": No such file or directory
  2014-09-28 22:21:45.619354 DEBUG puppetlabs.facter.facts.collection - skipping external facts for "/etc/puppetlabs/facter/facts.d": No such file or directory
  2014-09-28 22:21:45.619381 DEBUG puppetlabs.facter.facts.collection - no external facts were found.
  2014-09-28 22:21:45.619717 DEBUG puppetlabs.facter.ruby - loading all custom facts.
  2014-09-28 22:21:45.619779 DEBUG puppetlabs.facter.facts.resolver - resolving kernel facts.
  2014-09-28 22:21:45.619830 DEBUG puppetlabs.facter.facts.collection - fact "kernel" has resolved to "Linux".
  2014-09-28 22:21:45.619871 DEBUG puppetlabs.facter.facts.collection - fact "kernelrelease" has resolved to "3.15.4-x86_64-linode45".
  2014-09-28 22:21:45.619924 DEBUG puppetlabs.facter.facts.collection - fact "kernelversion" has resolved to "3.15.4".
  2014-09-28 22:21:45.619967 DEBUG puppetlabs.facter.facts.collection - fact "kernelmajversion" has resolved to "3.15".
  2014-09-28 22:21:45.620007 DEBUG puppetlabs.facter.facts.resolver - resolving operating system facts.
  2014-09-28 22:21:45.620046 DEBUG puppetlabs.facter.facts.resolver - resolving Linux Standard Base facts.
  2014-09-28 22:21:45.620115 DEBUG puppetlabs.facter.execution - executing command: /usr/bin/lsb_release -i -s
  2014-09-28 22:21:45.669448 DEBUG | - Ubuntu
  2014-09-28 22:21:45.669568 DEBUG puppetlabs.facter.execution - process exited with status code 0.
  2014-09-28 22:21:45.669654 DEBUG puppetlabs.facter.facts.collection - fact "lsbdistid" has resolved to "Ubuntu".
  2014-09-28 22:21:45.669710 DEBUG puppetlabs.facter.execution - executing command: /usr/bin/lsb_release -r -s
  2014-09-28 22:21:45.720320 DEBUG | - 14.04

That looks great! I can actually see what's happening and why, yay!

Fact values

This is where it gets a bit more tricky. I wanted to see what the differences were between facts returned by facter, and those by cfacter. To this end, I executed a simple test:

facter > facter.txt && ./cfacter > cfacter.txt

I then ran a diff on the resulting two files. 

diff -u cfacter.txt facter.txt
--- cfacter.txt 2014-09-28 22:30:30.567496574 +0000
+++ facter.txt  2014-09-28 22:30:30.207508905 +0000

Here are some of the things I noticed:

  1. Some facts changed value
    -macaddress => f2:3c:91:56:53:d9
    +macaddress => 66:2c:0e:85:17:6e

    What now?! Turns out the reported mac address changes from the very first interface encountered to the first connected network link:

    root@cobalt:~# ip link show dummy0
    2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
        link/ether 66:2c:0e:85:17:6e brd ff:ff:ff:ff:ff:ff
    root@cobalt:~# ip link show eth0
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether f2:3c:91:56:53:d9 brd ff:ff:ff:ff:ff:ff

    This actually looks like it fixed a bug, but I can still imagine this causing a problem or two for people who are unaware.

  2. Some facts are new: mountpoints, partitions, cfacterversion, dhcpservers. These all look very useful.
  3. Some facts have disappeared: puppetversion (bug?), rubyversion (duh, we're in c++ now) and path for instance
  4. Some structured facts come out in a differently ordered hash. This is not a problem as far as I can tell.

Conclusion

The initial findings look good. Of course the bug database is still very small, but it's growing quite fast since the announcement last week at PuppetConf 2014. I have a bigger concern though: the number of people familiar enough with C++ to contribute seems to be a lot smaller than the number of people with Ruby skills. By fragmenting the languages in which different parts of puppet are written, there's a risk of making it much harder for people to contribute to Puppet across the toolset.

We can only hope that this doesn't mean open source contributions will drop. Time will tell.

Brief insight on the all new Puppet Server!


It has been a while since PuppetConf 2014 ended in San Francisco and, as usual, a lot of news came out of it. The biggest news came during the keynote, when Luke Kanies, CEO of PuppetLabs, announced there will be a new Puppet Server. A new Puppet Server? Yup, and it's not the same as our current Puppet Master. Curious about what this means, I decided to check out the preview. All opinions stated below are my own and do not reflect those of OlinData or PuppetLabs.

Language

When Luke announced the new Puppet Server at PuppetConf 2014 here, what caught my eye was that it is being rewritten in Clojure on the JVM. I'm not a big fan of the JVM, but PuppetLabs has its own reasons for it. I won't claim to be an expert on Clojure or the JVM, but PuppetLabs has been using them for a while now with PuppetDB, and PuppetDB has been excellent on that front, so why the language change? In their blog post here, Chris Price lays out the details of Trapperkeeper, the Clojure framework used at PuppetLabs, and there are a lot of interesting comments on how it would work with Puppet. Our CEO Walter Heck also wrote a blog post here on what he thought about Clojure and Trapperkeeper. As stable as Clojure and the JVM have proven with PuppetDB, I'm just not a fan of how long the process takes to start up. When starting or restarting the service, it takes at least 30 seconds before you can connect to it, and 30 seconds is a long time for a service to start.

Performance

The first thing I wanted to test on the Puppet Server was performance. As mentioned earlier, PuppetDB is also built on top of the JVM and Clojure and has performed remarkably, which got me wondering how the new Puppet Server would perform. Comparing a Puppet Master and a Puppet Server setup where an agent is classified with a heavy-duty Puppet module, I'd say the Puppet Server performs noticeably faster than the Puppet Master. It may only be a couple of seconds faster, but hey, it is an improvement nevertheless, especially when multiplied across hundreds or thousands of agents. I have yet to test it under full load though. With that said, if it can improve performance this much as a preview version, I can't help but wonder how the final release will turn out. Judging by previous Puppet releases, I'd dare say it will improve performance tremendously. Something to look forward to!

Configuration

Seeing how Puppet Server is intended to be a drop-in replacement for the Puppet Master, another quirk I have is with the configuration folders. Quoting the Puppet Server Github documentation:

Puppet Server honors almost all settings in puppet.conf, and should pick them up automatically. It does introduce new settings for some "Puppet Server"-specific things, though

This statement got me confused for a moment: almost all, and not all? It became clear after I had set up the Puppet Server. There are now two folders where you configure your Puppet Master/Server: one in the usual path /etc/puppet/ and the other in /etc/puppetserver/. While Puppet Server's configuration is more extensive, it gets pretty messy: you have many configuration files for various purposes, and on top of that you are still using the legacy puppet.conf. I wonder what the rationale is for PuppetLabs not making use of the existing configuration setup instead of introducing a whole new set.

Puppet Components

In this section I'd like to share my findings on Puppet's components themselves. As part of being a drop-in replacement for the current Puppet Master, PuppetLabs is removing the need for Apache and Passenger in favor of a built-in webserver (Trapperkeeper). Kudos for this, as we no longer have to rely on an external component to get the Puppet Master/Server running in optimal condition. However, what if we do not want to run it as a webserver but as a daemon instead? How do I decouple all the components of Puppet (CA, daemon, etc.)?

I guess this is where the configuration comes in handy. Although I expressed my opinion on how bloated the configuration became, it also gives me a simple way to enable or disable a specific Puppet component on the server: I can have the CA or the Puppet Server process running simply by commenting or uncommenting specific lines in the configuration files.

Secondly, what about custom-written modules, which usually contain custom plugins, facts, providers, types, etc.? They are written in Ruby, so how does Puppet Server handle backward compatibility? Chris Price gives a brief explanation of what is under the hood of the new Puppet Server here, but what caught my eye was this line:

Thanks to the excellent JRuby project, we’re able to run portions of the existing Puppet Ruby code from inside of our Clojure application. To achieve this, during initialization, we create a pool of several JRuby interpreters. You can think of these as embedded Ruby virtual machines in which we can execute portions of the existing Puppet Ruby code.

Now my questions are answered. Puppet Server is backward compatible, and we can still write our modules' providers, types, facts and so on in Ruby. Extension points and metrics are being introduced to provide more robust insight into what is happening within Puppet, and they allow us to keep writing custom code in Ruby instead of Clojure. With more extension points opened up, we can definitely develop more custom components for Puppet. This got me even more interested, as OlinData released OpsTheatre not long ago and I would love to see integration between the two.

Puppet Apps

Together, Luke and Chris introduced the new concept of Puppet Apps with the preview release of Puppet Server. Puppet Apps is how PuppetLabs now separates the core application from the features. This allows PuppetLabs to continue development on the existing core product without affecting the features: any new development or feature integrated with the application's core will not break the overall application, and it allows developers to build more features. Another plus point.

The first app built is the Puppet Node Manager, which is only available in Puppet Enterprise and allows node classification based on rules. Nifty! Too bad it is not released as open source, or I might've missed the announcement on that.

Final Word

Puppet Server is just at the release-preview stage and there is more to come. For all the quirks I have with it, there are also a lot of improvements that come with it. It is still too early to say whether Puppet Server replacing the Puppet Master is a progressive or regressive step for PuppetLabs and the configuration management field in general. For now I'll just sit back and watch the spectacle unfold.

Puppet Master Agent Setup


Puppet is a Ruby-based configuration management tool (IT automation software), licensed under Apache 2.0 and designed to help system administrators automate the many repetitive tasks they regularly perform. It is an open-source, flexible, customizable framework for managing the configuration of computer systems. It defines and enforces the state of your infrastructure throughout the software development cycle, ensuring consistency and dependability across your infrastructure.

Puppet can be used to manage configuration on Unix, Linux and Microsoft Windows platforms. Puppet can manage hosts throughout their life cycle: beginning with the initial build and installation, through upgrades and maintenance, and finally end-of-life. Puppet is designed to continuously interact with the hosts.

The server is called the Puppet Master, the Puppet client software is called the agent, and the host itself is called a node.

Desired State of the Puppet Infrastructure

  • Facts: The puppet agent on each node sends data about the node's state, called facts, to the puppet master server.
  • Catalog: The puppet master uses the facts to compile detailed data about how the node should be configured, called the catalog, and sends it back to the puppet agent. The puppet agent makes any changes required by the catalog. The agent can also simulate these changes in --noop mode, i.e. a dry run.
  • Report: Each puppet agent sends a report to the puppet master, indicating any changes made to the configuration.
  • Report Collection: Puppet's open API can send report data to third-party tools, so infrastructure data can be shared with other teams.

Puppet Installation:

Puppet can be installed on a variety of platforms including RedHat, CentOS, Debian, Ubuntu, Solaris etc. The newest version of puppet can be installed from the package repositories, i.e. yum.puppetlabs.com for RedHat/CentOS and apt.puppetlabs.com for Debian/Ubuntu systems. Before installing, add proper DNS or hosts file entries for the machines involved.

To illustrate the Puppet Master agent setup, the servers below are used.

192.168.56.12       centos12.vm       Puppet Master
192.168.56.13       ubuntu13.vm      Puppet Agent
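
If you do not have DNS set up for these names, a hosts file entry on both machines will do. A minimal sketch based on the addresses above:

# /etc/hosts on both centos12.vm and ubuntu13.vm
192.168.56.12   centos12.vm   centos12
192.168.56.13   ubuntu13.vm   ubuntu13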

On Master:

Installing Puppet Repository.

$ wget http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
$ rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

[root@centos12 ~]# wget http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
--2014-08-11 08:41:52--  http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
Resolving yum.puppetlabs.com... 198.58.114.168, 2600:3c00::f03c:91ff:fe69:6bf0
Connecting to yum.puppetlabs.com|198.58.114.168|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5668 (5.5K) [application/x-redhat-package-manager]
Saving to: `puppetlabs-release-el-6.noarch.rpm'

100%[=================================================================================================================>] 5,668       --.-K/s   in 0.001s  

2014-08-11 08:41:58 (8.51 MB/s) - `puppetlabs-release-el-6.noarch.rpm' saved [5668/5668]

[root@centos12 ~]# rpm -ivh puppetlabs-release-el-6.noarch.rpm 
warning: puppetlabs-release-el-6.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 4bd6ec30: NOKEY
Preparing...                ########################################### [100%]
   1:puppetlabs-release     ########################################### [100%]
[root@centos12 ~]# 

 

On the master, you need to install the puppet and puppet-server packages from the puppetlabs repository:

$ yum install puppet puppet-server

[root@centos12 ~]# yum install puppet puppet-server
Loaded plugins: fastestmirror, security
....
....
---> Package ruby-irb.i686 0:1.8.7.352-13.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================
 Package                               Arch                        Version                                  Repository                                Size
===========================================================================================================================================================
Installing:
 puppet                                noarch                      3.6.2-1.el6                              puppetlabs-products                      1.3 M
 puppet-server                         noarch                      3.6.2-1.el6                              puppetlabs-products                       24 k
Installing for dependencies:
 augeas-libs                           i686                        1.0.0-5.el6_5.1                          updates                                  306 k
 facter                                i386                        1:2.1.0-1.el6                            puppetlabs-products                       89 k
 hiera                                 noarch                      1.3.4-1.el6                              puppetlabs-products                       23 k
 libselinux-ruby                       i686                        2.0.94-5.3.el6_4.1                       base                                      97 k
 ruby                                  i686                        1.8.7.352-13.el6                         updates                                  534 k
 ruby-augeas                           i386                        0.4.1-3.el6                              puppetlabs-deps                           21 k
 ruby-irb                              i686                        1.8.7.352-13.el6                         updates                                  314 k
 ruby-libs                             i686                        1.8.7.352-13.el6                         updates                                  1.6 M
 ruby-rdoc                             i686                        1.8.7.352-13.el6                         updates                                  377 k
 ruby-rgen                             noarch                      0.6.5-2.el6                              puppetlabs-deps                          237 k
 ruby-shadow                           i386                        1:2.2.0-2.el6                            puppetlabs-deps                           12 k
 rubygem-json                          i386                        1.5.5-1.el6                              puppetlabs-deps                          763 k
 rubygems                              noarch                      1.3.7-5.el6                              base                                     207 k
Updating for dependencies:
 libselinux                            i686                        2.0.94-5.3.el6_4.1                       base                                     108 k
 libselinux-utils                      i686                        2.0.94-5.3.el6_4.1                       base                                      81 k

Transaction Summary
===========================================================================================================================================================
Install      15 Package(s)
Upgrade       2 Package(s)

 Installing : puppet-3.6.2-1.el6.noarch                                                                                                             16/19 
 Installing : puppet-server-3.6.2-1.el6.noarch .............
....

[root@centos12 ~]# /etc/init.d/puppetmaster start
Starting puppetmaster:                                     [  OK  ]
[root@centos12 ~]# 

[root@centos12 ~]# ps -ef | grep puppet
puppet    2125     1  1 09:02 ?        00:00:00 /usr/bin/ruby /usr/bin/puppet master
[root@centos12 ~]# 

Now, Puppet Master is running on the Puppet server.

 

On Agent:

Installing Puppet Repository.

$ wget http://apt.puppetlabs.com/puppetlabs-release-stable.deb
$ dpkg -i puppetlabs-release-stable.deb 

root@ubuntu13:~# wget http://apt.puppetlabs.com/puppetlabs-release-stable.deb
--2014-08-11 13:10:20--  http://apt.puppetlabs.com/puppetlabs-release-stable.deb
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 198.58.114.168, 2600:3c00::f03c:91ff:fe69:6bf0
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|198.58.114.168|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3430 (3.3K) [application/x-debian-package]
Saving to: ‘puppetlabs-release-stable.deb’

100%[=================================================================================================================>] 3,430       --.-K/s   in 0s      

2014-08-11 13:10:26 (19.7 MB/s) - ‘puppetlabs-release-stable.deb’ saved [3430/3430]

root@ubuntu13:~# dpkg -i puppetlabs-release-stable.deb 
Selecting previously unselected package puppetlabs-release.
(Reading database ... 55700 files and directories currently installed.)
Unpacking puppetlabs-release (from puppetlabs-release-stable.deb) ...
Setting up puppetlabs-release (1.0-7) ...
root@ubuntu13:~# 

On agent, you need to install only the puppet client.

$ apt-get install puppet
.....
.....
Get:1 http://in.archive.ubuntu.com/ubuntu/ saucy/main augeas-lenses all 1.1.0-0ubuntu2 [273 kB]
Get:2 http://in.archive.ubuntu.com/ubuntu/ saucy/main debconf-utils all 1.5.50ubuntu1 [56.5 kB]                                                           
Get:3 http://in.archive.ubuntu.com/ubuntu/ saucy/main libruby1.9.1 i386 1.9.3.194-8.1ubuntu2 [4,184 kB]                                                   
Get:4 http://in.archive.ubuntu.com/ubuntu/ saucy/main ruby1.9.1 i386 1.9.3.194-8.1ubuntu2 [37.6 kB]                                                       
Get:5 http://in.archive.ubuntu.com/ubuntu/ saucy/main ruby all 1:1.9.3 [4,826 B]                                                                          
Get:6 http://in.archive.ubuntu.com/ubuntu/ saucy/main facter all 1.7.0-1ubuntu1 [76.0 kB]                                                                 
Get:7 http://in.archive.ubuntu.com/ubuntu/ saucy/main libaugeas0 i386 1.1.0-0ubuntu2 [176 kB]                                                             
Get:8 http://in.archive.ubuntu.com/ubuntu/ saucy/main ruby-augeas i386 0.5.0-1 [14.6 kB]                                                                  
Get:9 http://in.archive.ubuntu.com/ubuntu/ saucy/main libaugeas-ruby all 0.5.0-1 [1,604 B]                                                                
Get:10 http://in.archive.ubuntu.com/ubuntu/ saucy/main ruby-safe-yaml all 0.9.3-1 [16.5 kB]                                                               
Get:11 http://in.archive.ubuntu.com/ubuntu/ saucy/main puppet-common all 3.2.4-2ubuntu2 [950 kB]                                                          
Get:12 http://in.archive.ubuntu.com/ubuntu/ saucy/main puppet all 3.2.4-2ubuntu2 [12.9 kB]                                                                
Fetched 5,803 kB in 36s (159 kB/s)                                                                                                                        
Selecting previously unselected package libyaml-0-2:i386.
(Reading database ... 55706 files and directories currently installed.)
Unpacking libyaml-0-2:i386 (from .../libyaml-0-2_0.1.4-2build1_i386.deb) ...
Selecting previously unselected package augeas-lenses.
Unpacking augeas-lenses (from .../augeas-lenses_1.1.0-0ubuntu2_all.deb) ...
Selecting previously unselected package debconf-utils.
Unpacking debconf-utils (from .../debconf-utils_1.5.50ubuntu1_all.deb) ...
Selecting previously unselected package libruby1.9.1.
.....
.....

Now, the puppet agent is installed properly.

 

Puppet Configuration:

On most platforms, Puppet's configuration file is located in the /etc/puppet directory. The configuration file is called puppet.conf on Unix/Linux operating systems. It is created automatically during the Puppet installation. If required, the command below can be used to generate the configuration file.

$ cd /etc/puppet
$ puppet master --genconfig > puppet.conf

At this stage, we are going to add only one entry to the configuration file: the server value in the [main] section.

[main]
server=centos12.vm

Replace the server name with the fully qualified domain name (FQDN) of your puppet master. Restart the puppet master after making changes to the config file.

 

Puppet Certificate Signing

On the agent: run the command below to generate a certificate request.

$ puppet agent -t

root@ubuntu13:~# puppet agent -t
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for ubuntu13.vm
Info: Certificate Request fingerprint (SHA256): 0E:C0:C3:5C:A2:8A:6C:60:9C:20:92:79:71:E2:74:6E:7B:B2:2C:0C:E1:77:50:D5:72:29:C4:2F:5E:DE:95:47
Exiting; no certificate found and waitforcert is disabled
root@ubuntu13:~# 

On the master: run the command below to show all agent certificates. It lists all certificates, including those still waiting to be signed.

$ puppet cert list --all

[root@centos12 ~]# puppet cert list --all
  "ubuntu13.vm" (SHA256) 0E:C0:C3:5C:A2:8A:6C:60:9C:20:92:79:71:E2:74:6E:7B:B2:2C:0C:E1:77:50:D5:72:29:C4:2F:5E:DE:95:47
+ "centos12.vm" (SHA256) 78:5D:B5:D7:33:B6:E3:10:11:D6:C2:79:12:C2:12:FA:F6:25:8B:82:3D:FF:9A:B3:CF:2A:CE:30:7A:B5:08:D9 (alt names: "DNS:centos12.vm", "DNS:puppet", "DNS:puppet.vm")
[root@centos12 ~]# 

The above command shows two certificates; the puppet master acts as a puppet agent too.

Use the command below to sign the agent's certificate.

$ puppet cert sign ubuntu13.vm

[root@centos12 ~]# puppet cert sign ubuntu13.vm
Notice: Signed certificate request for ubuntu13.vm
Notice: Removing file Puppet::SSL::CertificateRequest ubuntu13.vm at '/var/lib/puppet/ssl/ca/requests/ubuntu13.vm.pem'
[root@centos12 ~]#  

After signing the certificate, run the command below on the agent again.

$ puppet agent -t

root@ubuntu13:~# puppet agent -t
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for ubuntu13.vm
Info: Applying configuration version '1407766889'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.06 seconds
root@ubuntu13:~# 

The agent is now authenticated with the master.
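
To verify the setup end to end, a minimal sketch of a site manifest can be placed on the master (the file path and resource are illustrative, not from the original article):

# /etc/puppet/manifests/site.pp on the master
node default {
  file { '/tmp/puppet-test':
    ensure  => file,
    content => "managed by puppet\n",
  }
}

On the next puppet agent -t run on ubuntu13.vm, the file /tmp/puppet-test should be created by the agent.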

Conclusion:

The Puppet Master agent setup is done. Play with it and see how easy it is. From the Puppet Master you can now control all the agent nodes, without connecting to the agent boxes over ssh.

 

 

Setup Puppet Master with Passenger and Apache on CentOS 6.5


Puppet includes a basic puppet master web server based on Ruby’s WEBrick library. This default server cannot be used for real-life loads, as it can’t handle concurrent connections; it is only suitable for small tests with ten nodes or fewer. Before you start managing your nodes with Puppet you must configure a more robust, production-quality web server, namely Apache with Passenger.

Passenger:

Passenger (AKA mod_rails or mod_rack) is an Apache 2.x module which lets you run Rails or Rack applications inside a general purpose web server, like Apache httpd or nginx.

Installing Passenger and Apache:

Make sure puppet master has been run at least once, so that all required SSL certificates are in place.

Install all the prerequisites and their dependencies.

$ sudo yum install httpd httpd-devel mod_ssl ruby-devel rubygems gcc
$ sudo yum install -y openssl-devel curl-devel gcc-c++ zlib-devel make

Next, install the rack and passenger, and their dependencies:

$ sudo gem install rack passenger

[root@centos15 puppet]# gem install rack passenger
Successfully installed rack-1.5.2
Building native extensions.  This could take a while...
Successfully installed rake-10.3.2
Successfully installed daemon_controller-1.2.0
Successfully installed passenger-4.0.52
4 gems installed
Installing ri documentation for rack-1.5.2...
Installing ri documentation for rake-10.3.2...
Installing ri documentation for daemon_controller-1.2.0...
Installing ri documentation for passenger-4.0.52...
Installing RDoc documentation for rack-1.5.2...
Installing RDoc documentation for rake-10.3.2...
Installing RDoc documentation for daemon_controller-1.2.0...
Installing RDoc documentation for passenger-4.0.52...
[root@centos15 puppet]#

$ sudo passenger-install-apache2-module

[root@centos15 puppet]# passenger-install-apache2-module
Welcome to the Phusion Passenger Apache 2 module installer, v4.0.52.

This installer will guide you through the entire installation process. It
shouldn't take more than 3 minutes in total.

Here's what you can expect from the installation process:

 1. The Apache 2 module will be installed for you.
 2. You'll learn how to configure Apache.
 3. You'll learn how to deploy a Ruby on Rails application.

Don't worry if anything goes wrong. This installer will advise you on how to
solve any problems.

Press Enter to continue, or Ctrl-C to abort.

--------------------------------------------

Which languages are you interested in?

Use <space> to select.
If the menu doesn't display correctly, press '!'
 ‣ ⬢  Ruby
   ⬢  Python
   ⬡  Node.js
   ⬡  Meteor

--------------------------------------------

Checking for required software...

 * Checking for C compiler...
      Found: yes
      Location: /usr/bin/cc
 * Checking for C++ compiler...
      Found: yes
      Location: /usr/bin/c++
 * Checking for Curl development headers with SSL support...
      Found: yes
      curl-config location: /usr/bin/curl-config
      Supports SSL: yes
      Usable: yes
      Version: libcurl 7.19.7
      Header location: /usr/include/curl/curl.h
 * Checking for OpenSSL development headers...
      Found: yes
      Location: /usr/include/openssl/ssl.h
 * Checking for Zlib development headers...
      Found: yes
      Location: /usr/include/zlib.h
 * Checking for Apache 2...
      Found: yes
      Location of httpd: /usr/sbin/httpd
      Apache version: 2.2.15
 * Checking for Apache 2 development headers...
      Found: yes
      Location of apxs2: /usr/sbin/apxs
 * Checking for Rake (associated with /usr/bin/ruby)...
      Found: yes
      Location: /usr/bin/ruby /usr/bin/rake
 * Checking for OpenSSL support for Ruby...
      Found: yes
 * Checking for RubyGems...
      Found: yes
 * Checking for Ruby development headers...
      Found: yes
      Location: /usr/lib64/ruby/1.8/x86_64-linux/ruby.h
 * Checking for rack...
      Found: yes
 * Checking for Apache Portable Runtime (APR) development headers...
      Found: yes
      Location: /usr/bin/apr-1-config
      Version: 1.3.9
 * Checking for Apache Portable Runtime Utility (APU) development headers...
      Found: yes
      Location: /usr/bin/apu-1-config
      Version: 1.3.9

--------------------------------------------

Sanity checking Apache installation...
All good!

--------------------------------------------
Compiling and installing Apache 2 module...
....
....

--------------------------------------------
Almost there!

Please edit your Apache configuration file, and add these lines:

   LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-4.0.52/buildout/apache2/mod_passenger.so
   <IfModule mod_passenger.c>
     PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.52
     PassengerDefaultRuby /usr/bin/ruby
   </IfModule>

After you restart Apache, you are ready to deploy any number of web
applications on Apache, with a minimum amount of configuration!

Press ENTER to continue.

--------------------------------------------

Deploying a web application: an example

Suppose you have a web application in /somewhere. Add a virtual host to your
Apache configuration file and set its DocumentRoot to /somewhere/public:

   <VirtualHost *:80>
      ServerName www.yourhost.com
      # !!! Be sure to point DocumentRoot to 'public'!
      DocumentRoot /somewhere/public    
      <Directory /somewhere/public>
         # This relaxes Apache security settings.
         AllowOverride all
         # MultiViews must be turned off.
         Options -MultiViews
         # Uncomment this if you're on Apache >= 2.4:
         #Require all granted
      </Directory>
   </VirtualHost>

And that's it! You may also want to check the Users Guide for security and
optimization tips, troubleshooting and other useful information:

  /usr/lib/ruby/gems/1.8/gems/passenger-4.0.52/doc/Users guide Apache.html
  https://www.phusionpassenger.com/documentation/Users%20guide%20Apache.html

Enjoy Phusion Passenger, a product of Phusion (www.phusion.nl) :-)
https://www.phusionpassenger.com

Phusion Passenger is a trademark of Hongli Lai & Ninh Bui.
[root@centos15 puppet]#

Configure Apache:

To configure Apache to run the puppet master application, you must:

  • Install the puppet master Rack application, by creating a directory for it and copying the config.ru file from the Puppet source.
  • Create a virtual host config file for the puppet master application, and install/enable it.

The steps look like this:

[root@centos15 puppet]# mkdir -p /usr/share/puppet/rack/puppetmasterd
[root@centos15 puppet]# mkdir /usr/share/puppet/rack/puppetmasterd/public /usr/share/puppet/rack/puppetmasterd/tmp
[root@centos15 puppet]# cp /usr/share/puppet/ext/rack/config.ru /usr/share/puppet/rack/puppetmasterd/
[root@centos15 puppet]# chown puppet:puppet /usr/share/puppet/rack/puppetmasterd/config.ru
[root@centos15 puppet]# 

Vhost Configuration:

This Apache Virtual Host configures the puppet master on the default puppetmaster port (8140). You can also see a similar file at ext/rack/example-passenger-vhost.conf in the Puppet source.

[root@centos15 ~]# cat /etc/httpd/conf.d/puppetmaster.conf
# You'll need to adjust the paths in the Passenger config depending on which OS
# you're using, as well as the installed version of Passenger.

# Debian/Ubuntu:
#LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-4.0.x/ext/apache2/mod_passenger.so
#PassengerRoot /var/lib/gems/1.8/gems/passenger-4.0.x
#PassengerRuby /usr/bin/ruby1.8

# RHEL/CentOS:
#LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-4.0.x/ext/apache2/mod_passenger.so
#PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.x
#PassengerRuby /usr/bin/ruby

   LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-4.0.52/buildout/apache2/mod_passenger.so
   <IfModule mod_passenger.c>
     PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.52
     PassengerDefaultRuby /usr/bin/ruby
   </IfModule>

# And the passenger performance tuning settings:
# Set this to about 1.5 times the number of CPU cores in your master:
PassengerMaxPoolSize 6
# Recycle master processes after they service 500 requests
PassengerMaxRequests 500
# Stop processes if they sit idle for 10 minutes
PassengerPoolIdleTime 300

Listen 8140
<VirtualHost *:8140>
    # Make Apache hand off HTTP requests to Puppet earlier, at the cost of
    # interfering with mod_proxy, mod_rewrite, etc. See note below.
    PassengerHighPerformance On

    SSLEngine On

    # Only allow high security cryptography. Alter if needed for compatibility.
    SSLProtocol ALL -SSLv2 -SSLv3
    SSLCipherSuite EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!IDEA:!ECDSA:kEDH:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
    SSLHonorCipherOrder     on
    
    SSLCertificateFile      /var/lib/puppet/ssl/certs/centos15.vm.pem
    SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/centos15.vm.pem
    SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem 
    SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem 
    SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem     
    SSLVerifyClient         optional
    SSLVerifyDepth          1
    SSLOptions              +StdEnvVars +ExportCertData

    # Apache 2.4 introduces the SSLCARevocationCheck directive and sets it to none
	# which effectively disables CRL checking. If you are using Apache 2.4+ you must
    # specify 'SSLCARevocationCheck chain' to actually use the CRL.

    # These request headers are used to pass the client certificate
    # authentication information on to the puppet master process
    RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public

    <Directory /usr/share/puppet/rack/puppetmasterd/>
      Options None
      AllowOverride None
      # Apply the right behavior depending on Apache version.
      <IfVersion < 2.4>
        Order allow,deny
        Allow from all
      </IfVersion>
      <IfVersion >= 2.4>
        Require all granted
      </IfVersion>
    </Directory>

    ErrorLog /var/log/httpd/puppet-server.centos15.vm_ssl_error.log
    CustomLog /var/log/httpd/puppet-server.centos15.vm_ssl_access.log combined
</VirtualHost>
[root@centos15 ~]# 

Restart Apache Services:

Now, stop the existing WEBrick implementation:

[root@centos15 puppet]# /etc/init.d/puppetmaster stop
Stopping puppetmaster: [  OK  ]
[root@centos15 puppet]# /etc/init.d/httpd start

[root@centos15 puppet]# chkconfig puppetmaster off
[root@centos15 puppet]# chkconfig httpd on

[root@centos15 ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for centos15.vm
Info: Applying configuration version '1412017802'
Notice: Finished catalog run in 0.02 seconds
[root@centos15 ~]# 

[root@centos15 ~]# passenger-status 
Version : 4.0.52
Date    : Mon Sep 29 15:10:29 -0400 2014
Instance: 2170
----------- General information -----------
Max pool size : 6
Processes     : 1
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/puppet/rack/puppetmasterd#default:
  App root: /usr/share/puppet/rack/puppetmasterd
  Requests in queue: 0
  * PID: 2320    Sessions: 0       Processed: 5       Uptime: 30s
    CPU: 2%      Memory  : 36M     Last used: 26s ago

[root@centos15 ~]# 

Conclusion:

The puppet master is now served by Apache and Passenger instead of WEBrick. You can view the passenger status at any time using the passenger-status command.

 

 

Testing OpsTheatre - Part One


Two months ago OlinData announced OpsTheatre, a pluggable operations dashboard that runs on Node.js. Our in-house dev team believes this tool will be key to eliminating a growing issue in the DevOps community: the duplication of operations management tools. On top of that, we open sourced the project, as we believe community support and involvement will be integral to the realization and shaping of this vision.

Having released it to the public as soon as we were done with the Minimum Viable Product (MVP), I am glad to see that certain parties have started taking interest in this project. Before it is ready for use in production, a few key areas need to be addressed:

  • a testing framework
  • continuous integration
  • code coverage
  • OpsTheatre Puppet modules

Bear in mind that this is just the beginning. OpsTheatre will not be limited to interfacing with Puppet, as we aim for it to support multiple DevOps tools. That said, development will for now be restricted to Puppet modules, so as to keep in line with the notion of focusing on one thing at a time and doing it well.

I'll be covering each topic in detail as the development of OpsTheatre progresses, starting with the testing framework that will be used for OpsTheatre.

Backend Testing

It is always good to start from the ground up, and so we begin with the OpsTheatre backend. I used OpsTheatre's Hiera module, node-puppet-hiera, as the guinea pig while deciding which testing framework to go with. This wasn't a hard decision to make, as the backend is written on top of Node.js.

The libraries used thus far in writing test cases for node-puppet-hiera are mocha and chai.

mocha needs no introduction, given its popularity as a testing framework. In fact, it's pretty much the library of choice when it comes to testing Node.js applications. It has great support for both Test Driven Development (TDD) and Behaviour Driven Development (BDD). Given that the backend modules are not user-facing, and the test cases are meant to run independently of other modules - a la unit testing - it was clear that TDD was the more sensible approach.

As for chai, it is an excellent library for assertion. It provides assertion styles for both TDD and BDD:

  • the expect/should style, for BDD assertions
  • and the good ol' assert style, for TDD assertions

It also integrates well with mocha, which does not come with a built-in assertion library. Personally, I feel this is a good thing, as it allows developers to decide on an assertion library of their choice.

Now that the libraries have been settled on, time for some code. First up, some quick skeletal coding - to figure out what sort of situations to test for during the run.

Skeletal code

To follow an age-old tradition, run the test case first. It should break, as there isn't any code implemented at this point (technically there is, since it was part of the MVP, but you get my point).

Broken test case

Before the test cases can be run, some mock data is required. I added some fixtures to mock both Hiera configuration and data files. This way, there won't be any chance of Hiera file overwrites should someone accidentally run the test cases on a production server.

Fixtures

Time to add some implementation logic for the various test scenarios. I started by adding logic to set up and tear down the prerequisites, i.e. the mock Hiera configuration and data files.

SetupTeardown

Next, I wrote logic to implement the test cases themselves.

Test logic

Now execute the test case, making sure to run it in TDD mode:

mocha --ui tdd tests/index.js

Success

With the backend testing suite done up for node-puppet-hiera, test cases for other OpsTheatre backend modules can be written in similar fashion. With the backend test suites on their way to completion, we can proceed on to writing test cases for the frontend. In the next blog post, I will be covering writing test cases for the OpsTheatre AngularJS frontend. Stay tuned!


Call for papers for ConfigManagementCamp and FOSDEM Config Management room

One of my favorite conferences in 2014 was the first edition of Config Management Camp, back in February. A very rare occasion where we had major contributors and users from each of the current FOSS Configuration Management tools in a single building.  
We had three excellent keynotes in historical order by Mark Burgess, who brought us CFEngine, Luke Kanies, who brought us Puppet, and Adam Jacob who brought us Chef. None of them spoke about their own tool but instead they all discussed larger topics. Luke even did his keynote off the top of his head without any slides while still having a coherent story, quite impressive!
 
Connected to ConfigManagementCamp there was also a packed Config Management Devroom at the well known FOSDEM conference. We literally had people guarding the doors to stop visitors from coming into an absolutely packed room. High quality talks (schedule here and videos here) made for a great day.
 
I'm very excited to be part of the organisation of both of these events again this year. The current focus is on finding good session proposals. If you have an interesting topic, please consider submitting it before December 1st. The CFP for FOSDEM is here and the one for Config Management Camp is here.
 
Looking forward to seeing your submission(s) in the queue!

Puppet 4 is around the corner, this is what's new


Over the past few months we have been getting more and more information about Puppet 4, as well as the vast improvements to Puppet Enterprise. The array of new features and major changes is gigantic. A lot of these changes are very welcome and I'm excited to see what they will bring to the Puppet world. On the other hand, I'm also a bit afraid. Here's a non-exhaustive list of changes.

DISCLAIMER: This post is completely based on hearsay, speculation and informal information. None of it is official, so don't send me any death threats if it turns out to be different once Puppet 4 is released. Where possible, I have included the source of all this hearsay.

Removing lots of deprecated code

A lot of deprecated stuff is being removed in Puppet 4. This will make many people happy and cause an equal number of people headaches. For a (near) full list of what is being removed, see this issue.

Some of the more important items highlighted:

Support for Ruby 1.8.7

This makes me happy. Ruby 1.8.7 has been a pain in many people's behinds for a very long time (May 31st 2008 was the original release date). In Puppet 4, there has been an active effort to weed out code that was needed to support 1.8.7. For those poor people stuck on Operating Systems that run 1.8.7 by default, see the below section on AIO packages.

Non-directory environment support

Directory environments have only recently replaced config-file based environments, yet all non-directory environment support is going away. Even we still have several consulting clients running on config-file environments. I applaud removing this functionality, but I think many people will trip over this one while moving to Puppet 4. Ticket here.

Stringified facts

First released in Puppet 3.5 (changelog entry here), facts have supported structures for a decent while now (actually, this was released in facter 2.0). In Puppet 4 you won't be able to turn off structured facts anymore, meaning references to any old-style facts will simply stop working. I foresee a lot of people needing code changes in order to take advantage of this. Ticket here.

The Ruby DSL

I never really understood why there was a Ruby DSL. Maybe in the early days of Puppet some people transferring from Chef or other tools preferred it, but it was always kind of awkward. If you're using it these days, you are most likely just "doing it wrong". Good riddance! Ticket here.

Puppet kick and friends

In the old days, when many people ran the puppet agent as a listening daemon, we could kick off agent runs by using puppet kick from the master. Over the course of the 2.x series of Puppet that changed though, and the majority of people now run puppet through either mcollective or cron jobs. Puppet kick is going away.

the import statement

Puppet's autoloading mechanism is a great feature, once you understand how it works. It makes code more readable as well as making class loading much faster. Before the autoloading mechanism existed though, there was the import statement. Nowadays, encountering the import statement is a sign of either an old codebase or bad design. It will no longer be available in Puppet 4.

node inheritance

Node inheritance was a feature that was introduced well before we had roles & profiles and hiera. It hasn't been recommended for quite a while and I'm happy to see it go in Puppet 4.

ERB is replaced by EPP

A new feature is the addition of EPP (Embedded Puppet). It's basically a better version of ERB with input checking and validation, so you can write more secure templates. The good news is that ERB support does not go away, so you can keep your existing ERB templates as they are and upgrade them to EPP as you go along. Read a bit more about EPP here.
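
To give a feel for what that looks like, here is a minimal sketch of an EPP template with a typed parameter list, plus the manifest code that renders it. The module, file and parameter names are made up purely for illustration.

<%- | String $servername, Integer $port = 80 | -%>
<%# Parameters are declared and type-checked up front -%>
<VirtualHost *:<%= $port %>>
  ServerName <%= $servername %>
</VirtualHost>

file { '/etc/httpd/conf.d/myvhost.conf':
  ensure  => file,
  content => epp('mymodule/vhost.conf.epp', { 'servername' => 'www.example.com' }),
}

A template declared like this fails early when a caller passes a value of the wrong type, which is exactly the kind of input checking ERB never had.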

I foresee some interesting times for the more important open source modules though as they will have to maintain backward compatibility somehow while taking advantage of the new features.

AIO Packaging

Since support for Ruby 1.8.7 is going away, there is a problem for Operating Systems that come with Ruby 1.8.7 built-in. In addition there are some other problems with Puppet that occur on systems with Ruby gems that interfere with the puppet code. In order to mitigate these problems, it seems highly likely that the puppet agent will be packaged with its own Ruby installation. This has some advantages and some disadvantages. The advantages are for instance a guarantee that nothing is interfering with puppet's Ruby. A disadvantage is that if there are for instance security or other bugs in gems packaged by Puppet Labs, we'll now have to start waiting for updates released by Puppet Labs. Another is that we can no longer do things like gem install blah if a gem is needed by our puppet code. Instead, the puppet binary got a new subcommand so we can say puppet gem install blah.

It remains to be seen how this works out in reality. For a good discussion, take a look at this mailing list topic

Rolling upgrade not recommended

The last point is a painful one: a rolling upgrade of a puppet 3.x to puppet 4.x master is not recommended (see Eric Sorensen's comment here). At first glance, that seems like it's not a big deal. But for a serious production environment, this means that it won't be possible to port over all agents at once. This in turn means that at least for a while, you'll need to backport code to your 3.x puppetmaster. It also means you need the infrastructure to set up a new puppet master, you'll need to adjust development workflows, etc. For a service that should be as little trouble as possible, this now means extra time that could be spent on more productive things.

In summary, I'm quite excited and a little scared about all the changes rolling around in Puppet land, but I think it's going to turn out for the better. We'll know more six months from now!

Eyaml - Hiera Data Encryption

eyaml encryption

Hiera helps to separate data from Puppet manifests. It lets us write reusable manifests and modules: Puppet classes can request the data they need from the Hiera data store. Hiera reads environment-specific key/value pairs (including passwords) from its own YAML files and passes them to Puppet. Puppet then populates templated configuration files and delivers them to the specified directories. However, Hiera still needs access to the passwords in order to pass them along to Puppet. If you store your passwords as plain text values, there is still a security problem: if someone were to compromise the Hiera repo where you store the key/value pairs, they would have access to all of your passwords.

The question arises: how do you store sensitive configuration data such as MySQL passwords and public and private keys? One possible answer is the eyaml backend, hiera-eyaml.

Hiera-eyaml was created by Tom Poulton. It makes it possible to encrypt sensitive data while still sharing almost everything with other team members, which is a much better way of keeping sensitive data safe. hiera-eyaml allows you to place blocks of encrypted text inside otherwise plain text YAML files.

Installing hiera-eyaml

Use the below command to install hiera-eyaml

gem install hiera-eyaml

root@monit:/etc/puppet# gem install hiera-eyaml
Fetching: trollop-2.1.1.gem (100%)
Fetching: highline-1.6.21.gem (100%)
Fetching: hiera-eyaml-2.0.6.gem (100%)
Successfully installed trollop-2.1.1
Successfully installed highline-1.6.21
Successfully installed hiera-eyaml-2.0.6
3 gems installed
Installing ri documentation for trollop-2.1.1...
Installing ri documentation for highline-1.6.21...
Installing ri documentation for hiera-eyaml-2.0.6...
Installing RDoc documentation for trollop-2.1.1...
Installing RDoc documentation for highline-1.6.21...
Installing RDoc documentation for hiera-eyaml-2.0.6...
root@monit:/etc/puppet#

Generate keys

The first step is to create the pair of keys. These keys will be used for encryption and decryption.

$ eyaml createkeys

root@monit:/etc/puppet# eyaml createkeys
[hiera-eyaml-core] Created key directory: ./keys
[hiera-eyaml-core] Keys created OK
root@monit:/etc/puppet# 

root@monit:/etc/puppet# ls -ltrh /etc/puppet/keys/
total 8.0K
-rw------- 1 root root 1.7K Feb 17 01:42 private_key.pkcs7.pem
-rw-r--r-- 1 root root 1.1K Feb 17 01:42 public_key.pkcs7.pem
root@monit:/etc/puppet#

This creates a public and private key with default names in the default location (./keys).
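
If you prefer to keep the keys somewhere other than ./keys, the paths can be passed explicitly. A small sketch, assuming the standard hiera-eyaml command line options (check eyaml createkeys --help on your version):

eyaml createkeys \
  --pkcs7-private-key=/etc/puppet/keys/private_key.pkcs7.pem \
  --pkcs7-public-key=/etc/puppet/keys/public_key.pkcs7.pem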

Securing keys

These keys are very important and need to be secured. The permissions for this folder should allow the puppet user (normally 'puppet') execute access to the keys directory, read only access to the keys themselves and restrict everyone else:

root@monit:/etc/puppet# chown -R puppet:puppet /etc/puppet/keys/
root@monit:/etc/puppet# chmod -R 0500 /etc/puppet/keys/
root@monit:/etc/puppet# chmod 0400 /etc/puppet/keys/*.pem
root@monit:/etc/puppet# ls -ltrh /etc/puppet/keys/
total 8.0K
-r-------- 1 puppet puppet 1.7K Feb 17 01:42 private_key.pkcs7.pem
-r-------- 1 puppet puppet 1.1K Feb 17 01:42 public_key.pkcs7.pem
root@monit:/etc/puppet# 

Encryption

You can use the eyaml command to feed in your sensitive data and receive an encrypted block. Below is an illustration; the general command syntax is covered further down as well.

root@monit:/etc/puppet# cat /opt/info.eyaml 
---
mysql::server::root_password: *ds02ldje232,dsj32

root@monit:/etc/puppet# eyaml encrypt -s '*ds02ldje232,dsj32'
string: ENC[PKCS7,MIIBiQYJKoZIhvcNAQcDoIIBejCCAXYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAoHYRjF1i5qCs+60jBpDLwBjPtLN70qyAFz1VpWtwbqBsLgnBk3ZLHzm8rskXsxaUz63Vs26ZfkswaHyZoav11h550BV2vcKp8T5XZPwWWOjEbsd0J21XAHMaVeaklVFyz/OCYOPWHjOILwn8GHLX5eqLRh3uRuBQNp+dcZ5WoVVlZ8odOo4qZukDAC2xg31/CyLGkml5wVXExzxGyOkYAfKTGb97qxc9Lqa4u9Tf+ylzJTt0cK6VKGiiQCPSdl+k4V7/OefGskkaaSrTOmSr9UNnlA3aGHQpAY+65rn6kDEolHzruvel+GsYRo+cdpal2wvw2R/eChjGVjOOMI7yBjBMBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBDmFAiajr74K0naqYQJWKujgCDdCKSQ/IAZXmfqrnH/1W/MUpbZeqW9HNLe+blQYRE4kA==]

OR

block: >
    ENC[PKCS7,MIIBiQYJKoZIhvcNAQcDoIIBejCCAXYCAQAxggEhMIIBHQIBADAFMAACAQEw
    DQYJKoZIhvcNAQEBBQAEggEAoHYRjF1i5qCs+60jBpDLwBjPtLN70qyAFz1V
    pWtwbqBsLgnBk3ZLHzm8rskXsxaUz63Vs26ZfkswaHyZoav11h550BV2vcKp
    8T5XZPwWWOjEbsd0J21XAHMaVeaklVFyz/OCYOPWHjOILwn8GHLX5eqLRh3u
    RuBQNp+dcZ5WoVVlZ8odOo4qZukDAC2xg31/CyLGkml5wVXExzxGyOkYAfKT
    Gb97qxc9Lqa4u9Tf+ylzJTt0cK6VKGiiQCPSdl+k4V7/OefGskkaaSrTOmSr
    9UNnlA3aGHQpAY+65rn6kDEolHzruvel+GsYRo+cdpal2wvw2R/eChjGVjOO
    MI7yBjBMBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBDmFAiajr74K0naqYQJ
    WKujgCDdCKSQ/IAZXmfqrnH/1W/MUpbZeqW9HNLe+blQYRE4kA==]
root@monit:/etc/puppet#

The encrypted value is available in two formats, string and block. Either one can be used. Next, copy and paste the text from after block: to the end of the block into your <filename>.eyaml file as you would with any other Hiera YAML file.

Here's the block in the <filename>.eyaml file. After replacing the actual password with the encrypted block, the file looks like this:

root@monit:/etc/puppet# cat /opt/info.eyaml 
---
mysql::server::root_password: >
    ENC[PKCS7,MIIBiQYJKoZIhvcNAQcDoIIBejCCAXYCAQAxggEhMIIBHQIBADAFMAACAQEw
    DQYJKoZIhvcNAQEBBQAEggEAoHYRjF1i5qCs+60jBpDLwBjPtLN70qyAFz1V
    pWtwbqBsLgnBk3ZLHzm8rskXsxaUz63Vs26ZfkswaHyZoav11h550BV2vcKp
    8T5XZPwWWOjEbsd0J21XAHMaVeaklVFyz/OCYOPWHjOILwn8GHLX5eqLRh3u
    RuBQNp+dcZ5WoVVlZ8odOo4qZukDAC2xg31/CyLGkml5wVXExzxGyOkYAfKT
    Gb97qxc9Lqa4u9Tf+ylzJTt0cK6VKGiiQCPSdl+k4V7/OefGskkaaSrTOmSr
    9UNnlA3aGHQpAY+65rn6kDEolHzruvel+GsYRo+cdpal2wvw2R/eChjGVjOO
    MI7yBjBMBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBDmFAiajr74K0naqYQJ
    WKujgCDdCKSQ/IAZXmfqrnH/1W/MUpbZeqW9HNLe+blQYRE4kA==]

root@monit:/etc/puppet#

Now that you have added a block to the <filename>.eyaml file, you can edit and change the password at any time using the below command. 

root@monit:/etc/puppet# eyaml edit /opt/info.eyaml

In edit mode the file looks like below, with the actual password shown in plain text.

---
mysql::server::root_password:  >
    DEC(1)::PKCS7[*ds02ldje232,dsj32]!

Just edit the text between the brackets, save as you usually would in your default editor, and that’s it.

Encryption Syntax

To encrypt something, you only need the public_key, so distribute that to people creating hiera properties. 

$ eyaml encrypt -f filename            # Encrypt a file
$ eyaml encrypt -s 'hello there'       # Encrypt a string
$ eyaml encrypt -p                     # Encrypt a password (prompt for it)

Decryption Syntax

To decrypt something, you need the public_key and the private_key.

$ eyaml decrypt -f filename               # Decrypt a file
$ eyaml decrypt -s 'ENC[PKCS7,.....]'     # Decrypt a string

 

Hiera

To use eyaml with Hiera and Puppet, we first have to configure hiera.yaml (the Hiera configuration file) to use the eyaml backend. The syntax is shown below.

root@monit:/etc/puppet# cat hiera.yaml 
---
:merge_behavior: deeper
:backends:
  - yaml
  - eyaml
:logger: console
:yaml:
  :datadir: '/etc/puppet/hieradata'
:eyaml:
  :datadir: '/etc/puppet/hieradata'
  :pkcs7_private_key: /etc/puppet/keys/private_key.pkcs7.pem
  :pkcs7_public_key: /etc/puppet/keys/public_key.pkcs7.pem
:hierarchy:
  - fqdn/%{fqdn}
  - env/%{environment}/%{fqdn}
  - osfamily/%{osfamily}
  - lsbdistcodename/%{lsbdistcodename}
  - common
root@monit:/etc/puppet# 

The default eyaml file extension is .eyaml, but this can be changed by setting :extension in the :eyaml block of hiera.yaml:

:eyaml:
  :extension: 'yaml'
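
Keep in mind that the puppet master only reads hiera.yaml at startup, so changes to the backends or the extension need a master restart before they take effect. How exactly depends on how you run the master; for a WEBrick master, for instance:

service puppetmaster restart

When the master runs under Apache/Passenger, restart Apache instead.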

Now, let's change the Hiera file extension from .yaml to .eyaml. Since we are going to use eyaml for encrypting sensitive data, it's easiest to stick with the default extension .eyaml. The actual Hiera data file looks like below.

root@monit:/etc/puppet# mv hieradata/common.yaml hieradata/common.eyaml
root@monit:/etc/puppet# cat hieradata/common.eyaml 
---
profile::user::users:
  krishna:
    ensure: 'present'
    uid: 2001 
    comment: 'Krishna Prajapati'
    home: '/home/krishna'
    shell: '/bin/bash'
    managehome: true
profile::ssh_authorized_key::authkey:
  'krishna@krishna-Compaq-510':
    ensure: 'present'
    user: 'krishna'
    type: 'ssh-rsa'
    key: 'AAAAB3NzaC1yc2EAAAADsdfsfdsd4343dfdf434343sffsdAQABAAABAQDlrD31aHwYCkQzDT/VckAnUaJEPsFddnWADCbi2oVK3FQ7BXQtO4c9aw7c7jgmJdfdnauhvuWfI6l8mA2S76bvOBUTO4zVdF8jNSy9sshEPYVGexKlNUa65f0FsxtEobf+ZctWahAGeKUsLwv/nBTRquzIl81Wdyc8UB3xqYXl+mVl422wILymHO42342fsdfsf3434fsdfsdf33434dsdsd2u9LVCB5bof3O1SltererqwrihhA5Ytbjf3/u56xun+H2QB9tSa4gAlerwrwe43434fsdfsf3434r'
root@monit:/etc/puppet# 

Next, we have to encrypt the SSH key (anything you feel is critical or confidential can be encrypted in the same way). The encryption is shown below.

root@monit:/etc/puppet# eyaml encrypt -s 'AAAAB3NzaC1yc2EAAAADsdfsfdsd4343dfdf434343sffsdAQABAAABAQDlrD31aHwYCkQzDT/VckAnUaJEPsFddnWADCbi2oVK3FQ7BXQtO4c9aw7c7jgmJdfdnauhvuWfI6l8mA2S76bvOBUTO4zVdF8jNSy9sshEPYVGexKlNUa65f0FsxtEobf+ZctWahAGeKUsLwv/nBTRquzIl81Wdyc8UB3xqYXl+mVl422wILymHO42342fsdfsf3434fsdfsdf33434dsdsd2u9LVCB5bof3O1SltererqwrihhA5Ytbjf3/u56xun+H2QB9tSa4gAlerwrwe43434fsdfsf3434r' hieradata/common.eyaml 
string: ENC[PKCS7,MIICzQYJKoZIhvcNAQcDoIICvjCCAroCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAmG0Nfo3lzWvmyHgPI0/B7IlF8oMSCUEIYfgeBlNIhVI/JJHXaQ7eNRiURROkliRPEXTjktm9YbqUq61FWwJGnr7dh8+Jet+niVl+FReAqUCpQmlRo7K58EfHG6J7bumKG6GF7zBlLKuLULuoq6FKYZDQ7y1SVfJP3x9tUD2lHFplLC9qyx/cyQOlQo8nRNmBAfSZYQB9VB3Wr61WyK9T+ddrCzYaZZQS/MeZ2S43ltuyWjKoO5lC9uZz3r6J0uyqSgyZmAtes86mnLuvFCSi5NDHCLI6/yKMbyqX42afdTKJDKe3Uzu4NXUzus/zS+nkJDlKifC49AY8mSkq4p2WMzCCAY4GCSqGSIb3DQEHATAdBglghkgBZQMEASoEEKYavyVBnCg02u4rXjRHN8mAggFg1XKsWgc2W7Cy2iCm41cLEj5XLpFaLDWFAgbDKbW8V5oQI3uTeDbuNCyK4oVoqxInuCcrfR9LOfXc6NF7hZMZ6nlOTJ9dVf5ctPr5mh/GMrH5VevqWlT0590Hk96YRyhfwEn+999lNwf1dcUjxH3jS/nOWsK8JJ2T1Vg5R73ioxOu3AY1JARDjcy+HxW8MaUxPTDr1NsV0bgQBJwTIO2YuDEsZSlpfU61jbil5Y0YGNMKPoHrVXt8MA5eqRnBv1tBfEX6y/TNOQF+eA/Rxh5ObKFeUzJuEXlyLv5J49jfEnb0FnN1a4zvOrrpM8ZY8HV4azyTSRCrclegBaFRw5fB0h/4eYeFQ1y9Mu7Wuji0XOLJ3GvItVzgByzETSJO0HNhc/iA2XM/jTfytQi/Zn3ecNEMC9C+5cfyAqSA6JL/N1UNwFxiCHdLUf4aPCBJvSeavB17V4HR4W6w4kvQiL3fOg==]

OR

block: >
    ENC[PKCS7,MIICzQYJKoZIhvcNAQcDoIICvjCCAroCAQAxggEhMIIBHQIBADAFMAACAQEw
    DQYJKoZIhvcNAQEBBQAEggEAmG0Nfo3lzWvmyHgPI0/B7IlF8oMSCUEIYfge
    BlNIhVI/JJHXaQ7eNRiURROkliRPEXTjktm9YbqUq61FWwJGnr7dh8+Jet+n
    iVl+FReAqUCpQmlRo7K58EfHG6J7bumKG6GF7zBlLKuLULuoq6FKYZDQ7y1S
    VfJP3x9tUD2lHFplLC9qyx/cyQOlQo8nRNmBAfSZYQB9VB3Wr61WyK9T+ddr
    CzYaZZQS/MeZ2S43ltuyWjKoO5lC9uZz3r6J0uyqSgyZmAtes86mnLuvFCSi
    5NDHCLI6/yKMbyqX42afdTKJDKe3Uzu4NXUzus/zS+nkJDlKifC49AY8mSkq
    4p2WMzCCAY4GCSqGSIb3DQEHATAdBglghkgBZQMEASoEEKYavyVBnCg02u4r
    XjRHN8mAggFg1XKsWgc2W7Cy2iCm41cLEj5XLpFaLDWFAgbDKbW8V5oQI3uT
    eDbuNCyK4oVoqxInuCcrfR9LOfXc6NF7hZMZ6nlOTJ9dVf5ctPr5mh/GMrH5
    VevqWlT0590Hk96YRyhfwEn+999lNwf1dcUjxH3jS/nOWsK8JJ2T1Vg5R73i
    oxOu3AY1JARDjcy+HxW8MaUxPTDr1NsV0bgQBJwTIO2YuDEsZSlpfU61jbil
    5Y0YGNMKPoHrVXt8MA5eqRnBv1tBfEX6y/TNOQF+eA/Rxh5ObKFeUzJuEXly
    Lv5J49jfEnb0FnN1a4zvOrrpM8ZY8HV4azyTSRCrclegBaFRw5fB0h/4eYeF
    Q1y9Mu7Wuji0XOLJ3GvItVzgByzETSJO0HNhc/iA2XM/jTfytQi/Zn3ecNEM
    C9C+5cfyAqSA6JL/N1UNwFxiCHdLUf4aPCBJvSeavB17V4HR4W6w4kvQiL3f
    Og==]
root@monit:/etc/puppet# 

Either of the encrypted values can be copied and used in the Hiera file. After adding the encrypted value, the Hiera file looks like below.

root@monit:/etc/puppet# cat hieradata/common.eyaml 
---
profile::user::users:
  krishna:
    ensure: 'present'
    uid: 2001 
    comment: 'Krishna Prajapati'
    home: '/home/krishna'
    shell: '/bin/bash'
    managehome: true
profile::ssh_authorized_key::authkey:
  'krishna@krishna-Compaq-510':
    ensure: 'present'
    user: 'krishna'
    type: 'ssh-rsa'
    key: ENC[PKCS7,MIICzQYJKoZIhvcNAQcDoIICvjCCAroCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAmG0Nfo3lzWvmyHgPI0/B7IlF8oMSCUEIYfgeBlNIhVI/JJHXaQ7eNRiURROkliRPEXTjktm9YbqUq61FWwJGnr7dh8+Jet+niVl+FReAqUCpQmlRo7K58EfHG6J7bumKG6GF7zBlLKuLULuoq6FKYZDQ7y1SVfJP3x9tUD2lHFplLC9qyx/cyQOlQo8nRNmBAfSZYQB9VB3Wr61WyK9T+ddrCzYaZZQS/MeZ2S43ltuyWjKoO5lC9uZz3r6J0uyqSgyZmAtes86mnLuvFCSi5NDHCLI6/yKMbyqX42afdTKJDKe3Uzu4NXUzus/zS+nkJDlKifC49AY8mSkq4p2WMzCCAY4GCSqGSIb3DQEHATAdBglghkgBZQMEASoEEKYavyVBnCg02u4rXjRHN8mAggFg1XKsWgc2W7Cy2iCm41cLEj5XLpFaLDWFAgbDKbW8V5oQI3uTeDbuNCyK4oVoqxInuCcrfR9LOfXc6NF7hZMZ6nlOTJ9dVf5ctPr5mh/GMrH5VevqWlT0590Hk96YRyhfwEn+999lNwf1dcUjxH3jS/nOWsK8JJ2T1Vg5R73ioxOu3AY1JARDjcy+HxW8MaUxPTDr1NsV0bgQBJwTIO2YuDEsZSlpfU61jbil5Y0YGNMKPoHrVXt8MA5eqRnBv1tBfEX6y/TNOQF+eA/Rxh5ObKFeUzJuEXlyLv5J49jfEnb0FnN1a4zvOrrpM8ZY8HV4azyTSRCrclegBaFRw5fB0h/4eYeFQ1y9Mu7Wuji0XOLJ3GvItVzgByzETSJO0HNhc/iA2XM/jTfytQi/Zn3ecNEMC9C+5cfyAqSA6JL/N1UNwFxiCHdLUf4aPCBJvSeavB17V4HR4W6w4kvQiL3fOg==]
root@monit:/etc/puppet# 

At any point in time the Hiera file can be edited using the below command to change the SSH key or password. In edit mode, eyaml shows the actual password/SSH key, and any edits should be made within the square brackets. The edit mode is smart enough to track the decrypted blocks and re-encrypt only those that have changed.
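
Note that eyaml edit opens the decrypted content in whatever editor the EDITOR environment variable points to, so it can help to set that explicitly first (a small example, assuming vim is installed):

root@monit:/etc/puppet# export EDITOR=vim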

root@monit:/etc/puppet# eyaml edit hieradata/common.eyaml
---
profile::user::users:
  krishna:
    ensure: 'present'
    uid: 2001
    comment: 'Krishna Prajapati'
    home: '/home/krishna'
    shell: '/bin/bash'
    managehome: true
profile::ssh_authorized_key::authkey:
  'krishna@krishna-Compaq-510':
    ensure: 'present'
    user: 'krishna'
    type: 'ssh-rsa'
    key: DEC(1)::PKCS7[AAAAB3NzaC1yc2EAAAADsdfsfdsd4343dfdf434343sffsdAQABAAABAQDlrD31aHwYCkQzDT/VckAnUaJEPsFddnWADCbi2oVK3FQ7BXQtO4c9aw7c7jgmJdfdnauhvuWfI6$ 

Finally, a Puppet run will put the changes into effect.

root@monit:/etc/puppet# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for monit.olindata.com
Info: Applying configuration version '1424234747'
Notice: /Stage[main]/Profile::Ssh_authorized_key/Ssh_authorized_key[krishna@krishna-Compaq-510]/key: key changed 'AAAAB3NzaC1yc2EAAAADsdfsfdsd4343dfdf434343sffsdAQABAAABAQDlrD31aHwYCkQzDT/VckAnUaJEPsFddnWADCbi2oVK3FQ7BXQtO4c9aw7c7jgmJdfdnauhvuWfI6l8mA2S76bvOBUTO4zVdF8jNSy9sshEPYVGexKlNUa65f0FsxtEobf+ZctWahAGeKUsLwv/nBTRquzIl81Wdyc8UB3xqYXl+mVl422wILymHO42342fsdfsf3434fsdfsdf33434dsdsd2u9LVCB5bof3O1SltererqwrihhA5Ytbjf3/u56xun+H2QB9tSa4gAlerwrwe43434fsdfsf3434r' to 'AAAAB3NzaC1yc2EAAAADsdfsfdsd4454545454545454544343dfdf434343sffsdAQABAAABAQDlrD31aHwYCkQzDT/VckAnUaJEPsFddnWADCbi2oVK3FQ7BXQtO4c9aw7c7jgmJdfdnauhvuWfI6l8mA2S76bvOBUTO4zVdF8jNSy9sshEPYVGexKlNUa65f0FsxtEobf+ZctWahAGeKUsLwv/nBTRquzIl81Wdyc8UB3xqYXl+mVl422wILymHO42342fsdfsf3434fsdfsdf33434dsdsd2u9LVCB5bof3O1SltererqwrihhA5Ytbjf3/u56xun+H2QB9tSa4gAlerwrwe43434fsdfsf3434r'
Info: Computing checksum on file /home/krishna/.ssh/authorized_keys
Notice: Finished catalog run in 0.04 seconds
root@monit:/etc/puppet# 

Conclusion

Eyaml ensures that critical data is stored encrypted. Nodes that need the critical information will receive a catalog with the decrypted values, and since the catalog is delivered over SSL the information is still protected in transit. Eyaml makes your infrastructure more secure and robust.

Now it's time to move all of your sensitive data into Hiera and eyaml. Feel free to try it out. Happy learning.

 

 

Setup Puppet Server on CentOS 7.0

puppetserver

Puppet Server is a next-generation alternative to our current puppet master, which builds on the successful Clojure technology stack underlying products like PuppetDB. Puppet Server is an application that runs on the Java Virtual Machine (JVM) and provides the same services as the classic Puppet master application. It mostly does this by running the existing Puppet master code in several JRuby interpreters, but it replaces some parts of the classic application with new services written in Clojure.

Puppet Server is one of two recommended ways to run the Puppet master service; the other is a Rack server. Today they’re mostly equivalent — Puppet Server is easier to set up and performs better under heavy loads, but they provide the same services. In the future, Puppet Server’s features will further surpass the Rack Puppet master.

System Requirements

Puppet Server is configured to use 2GB of RAM by default. If you just want to play around with an installation on a virtual machine, the allocation can be reduced to a minimum of 512MB in a test environment. Make sure you have a good amount of RAM available when using Puppet Server in production though, to guarantee optimal performance.

Puppet Server: Installing

Enable the Puppet Labs package repository; packages are available for both CentOS and Debian based systems. Check your operating system version and download the compatible repository package.

[root@puppetserver ~]# wget http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
--2015-03-01 20:12:25--  http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
Resolving yum.puppetlabs.com (yum.puppetlabs.com)... 198.58.114.168, 2600:3c00::f03c:91ff:fe69:6bf0
Connecting to yum.puppetlabs.com (yum.puppetlabs.com)|198.58.114.168|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10160 (9.9K) [application/x-redhat-package-manager]
Saving to: ‘puppetlabs-release-el-7.noarch.rpm’

100%[===================================================================================================================>] 10,160      --.-K/s   in 0s      

2015-03-01 20:12:32 (61.8 MB/s) - ‘puppetlabs-release-el-7.noarch.rpm’ saved [10160/10160]

[root@puppetserver ~]# rpm -ivh puppetlabs-release-el-7.noarch.rpm
warning: puppetlabs-release-el-7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 4bd6ec30: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppetlabs-release-7-11          ################################# [100%]
[root@puppetserver ~]#

In case you're setting up Puppet Server on the same machine that was running the classic puppet master, make sure to stop the Apache and puppet master services on the existing system before starting Puppet Server.
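
A minimal sketch of what that looks like on CentOS 7 (the exact service names may differ depending on how the old master was set up):

systemctl stop httpd
systemctl stop puppetmaster
systemctl disable httpd
systemctl disable puppetmaster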

Install

Install Puppet Server using the below command. This installs puppetserver (not to be confused with puppet-server) along with all of its dependencies. The dependency packages are listed in the output below as well, which gives an idea of what gets pulled in.

yum install puppetserver

[root@puppetserver ~]# yum install puppetserver
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.excellmedia.net
 * extras: centos.excellmedia.net
 * updates: centos.excellmedia.net
Resolving Dependencies
--> Running transaction check
---> Package puppetserver.noarch 0:1.0.2-1.el7 will be installed
--> Processing Dependency: puppet < 4.0.0 for package: puppetserver-1.0.2-1.el7.noarch
--> Processing Dependency: puppet >= 3.7.3 for package: puppetserver-1.0.2-1.el7.noarch
--> Processing Dependency: java-1.7.0-openjdk for package: puppetserver-1.0.2-1.el7.noarch
--> Running transaction check
---> Package java-1.7.0-openjdk.x86_64 1:1.7.0.75-2.5.4.2.el7_0 will be installed
--> Processing Dependency: java-1.7.0-openjdk-headless = 1:1.7.0.75-2.5.4.2.el7_0 for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: xorg-x11-fonts-Type1 for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpulse.so.0(PULSE_0)(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpng15.so.15(PNG15_0)(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libjvm.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libjpeg.so.62(LIBJPEG_6.2)(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libjava.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: fontconfig for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpulse.so.0()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpng15.so.15()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libjvm.so()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libjpeg.so.62()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libjava.so()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libgif.so.4()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libfontconfig.so.1()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libcups.so.2()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libawt.so()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libXtst.so.6()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libXrender.so.1()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libXi.so.6()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libXext.so.6()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libX11.so.6()(64bit) for package: 1:java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
---> Package puppet.noarch 0:3.7.4-1.el7 will be installed
--> Processing Dependency: ruby >= 1.8.7 for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: facter >= 1:1.7.0 for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: hiera >= 1.0.0 for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: ruby >= 1.8 for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: ruby-shadow for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: /usr/bin/ruby for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: ruby(selinux) for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: ruby-augeas for package: puppet-3.7.4-1.el7.noarch
--> Processing Dependency: rubygem-json for package: puppet-3.7.4-1.el7.noarch
--> Running transaction check
---> Package cups-libs.x86_64 1:1.6.3-14.el7 will be installed
---> Package facter.x86_64 1:2.4.1-1.el7 will be installed
--> Processing Dependency: pciutils for package: 1:facter-2.4.1-1.el7.x86_64
---> Package fontconfig.x86_64 0:2.10.95-7.el7 will be installed
--> Processing Dependency: fontpackages-filesystem for package: fontconfig-2.10.95-7.el7.x86_64
---> Package giflib.x86_64 0:4.1.6-9.el7 will be installed
--> Processing Dependency: libSM.so.6()(64bit) for package: giflib-4.1.6-9.el7.x86_64
--> Processing Dependency: libICE.so.6()(64bit) for package: giflib-4.1.6-9.el7.x86_64
---> Package hiera.noarch 0:1.3.4-1.el7 will be installed
---> Package java-1.7.0-openjdk-headless.x86_64 1:1.7.0.75-2.5.4.2.el7_0 will be installed
--> Processing Dependency: lcms2 >= 2.5 for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: jpackage-utils >= 1.7.3-1jpp.2 for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: tzdata-java for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpangoft2-1.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpangocairo-1.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libpango-1.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: liblcms2.so.2()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libgtk-x11-2.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libgdk_pixbuf-2.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libgdk-x11-2.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libcairo.so.2()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
--> Processing Dependency: libatk-1.0.so.0()(64bit) for package: 1:java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
---> Package libX11.x86_64 0:1.6.0-2.1.el7 will be installed
--> Processing Dependency: libX11-common = 1.6.0-2.1.el7 for package: libX11-1.6.0-2.1.el7.x86_64
--> Processing Dependency: libxcb.so.1()(64bit) for package: libX11-1.6.0-2.1.el7.x86_64
---> Package libXext.x86_64 0:1.3.2-2.1.el7 will be installed
---> Package libXi.x86_64 0:1.7.2-2.1.el7 will be installed
---> Package libXrender.x86_64 0:0.9.8-2.1.el7 will be installed
---> Package libXtst.x86_64 0:1.2.2-2.1.el7 will be installed
---> Package libjpeg-turbo.x86_64 0:1.2.90-5.el7 will be installed
---> Package libpng.x86_64 2:1.5.13-5.el7 will be installed
---> Package libselinux-ruby.x86_64 0:2.2.2-6.el7 will be installed
---> Package pulseaudio-libs.x86_64 0:3.0-22.el7 will be installed
--> Processing Dependency: libsndfile.so.1(libsndfile.so.1.0)(64bit) for package: pulseaudio-libs-3.0-22.el7.x86_64
--> Processing Dependency: libsndfile.so.1()(64bit) for package: pulseaudio-libs-3.0-22.el7.x86_64
--> Processing Dependency: libasyncns.so.0()(64bit) for package: pulseaudio-libs-3.0-22.el7.x86_64
---> Package ruby.x86_64 0:2.0.0.353-22.el7_0 will be installed
--> Processing Dependency: ruby-libs(x86-64) = 2.0.0.353-22.el7_0 for package: ruby-2.0.0.353-22.el7_0.x86_64
--> Processing Dependency: rubygem(bigdecimal) >= 1.2.0 for package: ruby-2.0.0.353-22.el7_0.x86_64
--> Processing Dependency: ruby(rubygems) >= 2.0.14 for package: ruby-2.0.0.353-22.el7_0.x86_64
--> Processing Dependency: libruby.so.2.0()(64bit) for package: ruby-2.0.0.353-22.el7_0.x86_64
---> Package ruby-augeas.x86_64 0:0.4.1-3.el7 will be installed
--> Processing Dependency: augeas-libs >= 0.8.0 for package: ruby-augeas-0.4.1-3.el7.x86_64
--> Processing Dependency: libaugeas.so.0(AUGEAS_0.8.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
--> Processing Dependency: libaugeas.so.0(AUGEAS_0.1.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
--> Processing Dependency: libaugeas.so.0(AUGEAS_0.12.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
--> Processing Dependency: libaugeas.so.0(AUGEAS_0.10.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
--> Processing Dependency: libaugeas.so.0(AUGEAS_0.11.0)(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
--> Processing Dependency: libaugeas.so.0()(64bit) for package: ruby-augeas-0.4.1-3.el7.x86_64
---> Package ruby-shadow.x86_64 1:2.2.0-2.el7 will be installed
---> Package rubygem-json.x86_64 0:1.7.7-22.el7_0 will be installed
---> Package xorg-x11-fonts-Type1.noarch 0:7.5-9.el7 will be installed
--> Processing Dependency: ttmkfdir for package: xorg-x11-fonts-Type1-7.5-9.el7.noarch
--> Processing Dependency: ttmkfdir for package: xorg-x11-fonts-Type1-7.5-9.el7.noarch
--> Processing Dependency: mkfontdir for package: xorg-x11-fonts-Type1-7.5-9.el7.noarch
--> Processing Dependency: mkfontdir for package: xorg-x11-fonts-Type1-7.5-9.el7.noarch
--> Running transaction check
---> Package atk.x86_64 0:2.8.0-4.el7 will be installed
---> Package augeas-libs.x86_64 0:1.1.0-12.el7_0.1 will be installed
---> Package cairo.x86_64 0:1.12.14-6.el7 will be installed
--> Processing Dependency: libpixman-1.so.0()(64bit) for package: cairo-1.12.14-6.el7.x86_64
--> Processing Dependency: libGL.so.1()(64bit) for package: cairo-1.12.14-6.el7.x86_64
--> Processing Dependency: libEGL.so.1()(64bit) for package: cairo-1.12.14-6.el7.x86_64
---> Package fontpackages-filesystem.noarch 0:1.44-8.el7 will be installed
---> Package gdk-pixbuf2.x86_64 0:2.28.2-4.el7 will be installed
--> Processing Dependency: libtiff.so.5(LIBTIFF_4.0)(64bit) for package: gdk-pixbuf2-2.28.2-4.el7.x86_64
--> Processing Dependency: libtiff.so.5()(64bit) for package: gdk-pixbuf2-2.28.2-4.el7.x86_64
--> Processing Dependency: libjasper.so.1()(64bit) for package: gdk-pixbuf2-2.28.2-4.el7.x86_64
---> Package gtk2.x86_64 0:2.24.22-5.el7_0.1 will be installed
--> Processing Dependency: libXrandr >= 1.2.99.4-2 for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: hicolor-icon-theme for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: libXrandr.so.2()(64bit) for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: libXinerama.so.1()(64bit) for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: libXfixes.so.3()(64bit) for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: libXdamage.so.1()(64bit) for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: libXcursor.so.1()(64bit) for package: gtk2-2.24.22-5.el7_0.1.x86_64
--> Processing Dependency: libXcomposite.so.1()(64bit) for package: gtk2-2.24.22-5.el7_0.1.x86_64
---> Package javapackages-tools.noarch 0:3.4.1-6.el7_0 will be installed
--> Processing Dependency: python-javapackages = 3.4.1-6.el7_0 for package: javapackages-tools-3.4.1-6.el7_0.noarch
--> Processing Dependency: libxslt for package: javapackages-tools-3.4.1-6.el7_0.noarch
---> Package lcms2.x86_64 0:2.5-4.el7 will be installed
---> Package libICE.x86_64 0:1.0.8-7.el7 will be installed
---> Package libSM.x86_64 0:1.2.1-7.el7 will be installed
---> Package libX11-common.noarch 0:1.6.0-2.1.el7 will be installed
---> Package libasyncns.x86_64 0:0.8-7.el7 will be installed
---> Package libsndfile.x86_64 0:1.0.25-9.el7 will be installed
--> Processing Dependency: libvorbisenc.so.2()(64bit) for package: libsndfile-1.0.25-9.el7.x86_64
--> Processing Dependency: libvorbis.so.0()(64bit) for package: libsndfile-1.0.25-9.el7.x86_64
--> Processing Dependency: libogg.so.0()(64bit) for package: libsndfile-1.0.25-9.el7.x86_64
--> Processing Dependency: libgsm.so.1()(64bit) for package: libsndfile-1.0.25-9.el7.x86_64
--> Processing Dependency: libFLAC.so.8()(64bit) for package: libsndfile-1.0.25-9.el7.x86_64
---> Package libxcb.x86_64 0:1.9-5.el7 will be installed
--> Processing Dependency: libXau.so.6()(64bit) for package: libxcb-1.9-5.el7.x86_64
---> Package pango.x86_64 0:1.34.1-5.el7 will be installed
--> Processing Dependency: libthai >= 0.1.9 for package: pango-1.34.1-5.el7.x86_64
--> Processing Dependency: libthai.so.0(LIBTHAI_0.1)(64bit) for package: pango-1.34.1-5.el7.x86_64
--> Processing Dependency: libthai.so.0()(64bit) for package: pango-1.34.1-5.el7.x86_64
--> Processing Dependency: libharfbuzz.so.0()(64bit) for package: pango-1.34.1-5.el7.x86_64
--> Processing Dependency: libXft.so.2()(64bit) for package: pango-1.34.1-5.el7.x86_64
---> Package pciutils.x86_64 0:3.2.1-4.el7 will be installed
---> Package ruby-libs.x86_64 0:2.0.0.353-22.el7_0 will be installed
---> Package rubygem-bigdecimal.x86_64 0:1.2.0-22.el7_0 will be installed
---> Package rubygems.noarch 0:2.0.14-22.el7_0 will be installed
--> Processing Dependency: rubygem(rdoc) >= 4.0.0 for package: rubygems-2.0.14-22.el7_0.noarch
--> Processing Dependency: rubygem(psych) >= 2.0.0 for package: rubygems-2.0.14-22.el7_0.noarch
--> Processing Dependency: rubygem(io-console) >= 0.4.2 for package: rubygems-2.0.14-22.el7_0.noarch
---> Package ttmkfdir.x86_64 0:3.0.9-41.el7 will be installed
---> Package tzdata-java.noarch 0:2015a-1.el7_0 will be installed
---> Package xorg-x11-font-utils.x86_64 1:7.5-18.1.el7 will be installed
--> Processing Dependency: libfontenc.so.1()(64bit) for package: 1:xorg-x11-font-utils-7.5-18.1.el7.x86_64
--> Processing Dependency: libXfont.so.1()(64bit) for package: 1:xorg-x11-font-utils-7.5-18.1.el7.x86_64
--> Running transaction check
---> Package flac-libs.x86_64 0:1.3.0-4.el7 will be installed
---> Package gsm.x86_64 0:1.0.13-11.el7 will be installed
---> Package harfbuzz.x86_64 0:0.9.20-3.el7 will be installed
--> Processing Dependency: libgraphite2.so.3()(64bit) for package: harfbuzz-0.9.20-3.el7.x86_64
---> Package hicolor-icon-theme.noarch 0:0.12-7.el7 will be installed
---> Package jasper-libs.x86_64 0:1.900.1-26.el7_0.3 will be installed
---> Package libXau.x86_64 0:1.0.8-2.1.el7 will be installed
---> Package libXcomposite.x86_64 0:0.4.4-4.1.el7 will be installed
---> Package libXcursor.x86_64 0:1.1.14-2.1.el7 will be installed
---> Package libXdamage.x86_64 0:1.1.4-4.1.el7 will be installed
---> Package libXfixes.x86_64 0:5.0.1-2.1.el7 will be installed
---> Package libXfont.x86_64 0:1.4.7-2.el7_0 will be installed
---> Package libXft.x86_64 0:2.3.1-5.1.el7 will be installed
---> Package libXinerama.x86_64 0:1.1.3-2.1.el7 will be installed
---> Package libXrandr.x86_64 0:1.4.1-2.1.el7 will be installed
---> Package libfontenc.x86_64 0:1.1.1-5.el7 will be installed
---> Package libogg.x86_64 2:1.3.0-7.el7 will be installed
---> Package libthai.x86_64 0:0.1.14-9.el7 will be installed
---> Package libtiff.x86_64 0:4.0.3-14.el7 will be installed
--> Processing Dependency: libjbig.so.2.0()(64bit) for package: libtiff-4.0.3-14.el7.x86_64
---> Package libvorbis.x86_64 1:1.3.3-8.el7 will be installed
---> Package libxslt.x86_64 0:1.1.28-5.el7 will be installed
---> Package mesa-libEGL.x86_64 0:9.2.5-6.20131218.el7_0 will be installed
--> Processing Dependency: mesa-libgbm = 9.2.5-6.20131218.el7_0 for package: mesa-libEGL-9.2.5-6.20131218.el7_0.x86_64
--> Processing Dependency: libglapi.so.0()(64bit) for package: mesa-libEGL-9.2.5-6.20131218.el7_0.x86_64
--> Processing Dependency: libgbm.so.1()(64bit) for package: mesa-libEGL-9.2.5-6.20131218.el7_0.x86_64
---> Package mesa-libGL.x86_64 0:9.2.5-6.20131218.el7_0 will be installed
--> Processing Dependency: libXxf86vm.so.1()(64bit) for package: mesa-libGL-9.2.5-6.20131218.el7_0.x86_64
---> Package pixman.x86_64 0:0.32.4-3.el7 will be installed
---> Package python-javapackages.noarch 0:3.4.1-6.el7_0 will be installed
--> Processing Dependency: python-lxml for package: python-javapackages-3.4.1-6.el7_0.noarch
---> Package rubygem-io-console.x86_64 0:0.4.2-22.el7_0 will be installed
---> Package rubygem-psych.x86_64 0:2.0.0-22.el7_0 will be installed
--> Processing Dependency: libyaml-0.so.2()(64bit) for package: rubygem-psych-2.0.0-22.el7_0.x86_64
---> Package rubygem-rdoc.noarch 0:4.0.0-22.el7_0 will be installed
--> Processing Dependency: ruby(irb) = 2.0.0.353 for package: rubygem-rdoc-4.0.0-22.el7_0.noarch
--> Running transaction check
---> Package graphite2.x86_64 0:1.2.2-5.el7 will be installed
---> Package jbigkit-libs.x86_64 0:2.0-11.el7 will be installed
---> Package libXxf86vm.x86_64 0:1.1.3-2.1.el7 will be installed
---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
---> Package mesa-libgbm.x86_64 0:9.2.5-6.20131218.el7_0 will be installed
---> Package mesa-libglapi.x86_64 0:9.2.5-6.20131218.el7_0 will be installed
---> Package python-lxml.x86_64 0:3.2.1-4.el7 will be installed
---> Package ruby-irb.noarch 0:2.0.0.353-22.el7_0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                                      Arch                    Version                                     Repository                            Size
=============================================================================================================================================================
Installing:
 puppetserver                                 noarch                  1.0.2-1.el7                                 puppetlabs-products                   29 M
Installing for dependencies:
 atk                                          x86_64                  2.8.0-4.el7                                 base                                 233 k
 augeas-libs                                  x86_64                  1.1.0-12.el7_0.1                            updates                              327 k
 cairo                                        x86_64                  1.12.14-6.el7                               base                                 697 k
 cups-libs                                    x86_64                  1:1.6.3-14.el7                              base                                 352 k
 facter                                       x86_64                  1:2.4.1-1.el7                               puppetlabs-products                   98 k
 flac-libs                                    x86_64                  1.3.0-4.el7                                 base                                 169 k
 fontconfig                                   x86_64                  2.10.95-7.el7                               base                                 228 k
 fontpackages-filesystem                      noarch                  1.44-8.el7                                  base                                 9.9 k
 gdk-pixbuf2                                  x86_64                  2.28.2-4.el7                                base                                 533 k
 giflib                                       x86_64                  4.1.6-9.el7                                 base                                  40 k
 graphite2                                    x86_64                  1.2.2-5.el7                                 base                                  81 k
 gsm                                          x86_64                  1.0.13-11.el7                               base                                  30 k
 gtk2                                         x86_64                  2.24.22-5.el7_0.1                           updates                              3.4 M
 harfbuzz                                     x86_64                  0.9.20-3.el7                                base                                 144 k
 hicolor-icon-theme                           noarch                  0.12-7.el7                                  base                                  42 k
 hiera                                        noarch                  1.3.4-1.el7                                 puppetlabs-products                   23 k
 jasper-libs                                  x86_64                  1.900.1-26.el7_0.3                          updates                              149 k
 java-1.7.0-openjdk                           x86_64                  1:1.7.0.75-2.5.4.2.el7_0                    updates                              197 k
 java-1.7.0-openjdk-headless                  x86_64                  1:1.7.0.75-2.5.4.2.el7_0                    updates                               25 M
 javapackages-tools                           noarch                  3.4.1-6.el7_0                               updates                               72 k
 jbigkit-libs                                 x86_64                  2.0-11.el7                                  base                                  46 k
 lcms2                                        x86_64                  2.5-4.el7                                   base                                 133 k
 libICE                                       x86_64                  1.0.8-7.el7                                 base                                  63 k
 libSM                                        x86_64                  1.2.1-7.el7                                 base                                  38 k
 libX11                                       x86_64                  1.6.0-2.1.el7                               base                                 605 k
 libX11-common                                noarch                  1.6.0-2.1.el7                               base                                 181 k
 libXau                                       x86_64                  1.0.8-2.1.el7                               base                                  29 k
 libXcomposite                                x86_64                  0.4.4-4.1.el7                               base                                  22 k
 libXcursor                                   x86_64                  1.1.14-2.1.el7                              base                                  30 k
 libXdamage                                   x86_64                  1.1.4-4.1.el7                               base                                  20 k
 libXext                                      x86_64                  1.3.2-2.1.el7                               base                                  38 k
 libXfixes                                    x86_64                  5.0.1-2.1.el7                               base                                  18 k
 libXfont                                     x86_64                  1.4.7-2.el7_0                               updates                              144 k
 libXft                                       x86_64                  2.3.1-5.1.el7                               base                                  57 k
 libXi                                        x86_64                  1.7.2-2.1.el7                               base                                  39 k
 libXinerama                                  x86_64                  1.1.3-2.1.el7                               base                                  14 k
 libXrandr                                    x86_64                  1.4.1-2.1.el7                               base                                  25 k
 libXrender                                   x86_64                  0.9.8-2.1.el7                               base                                  25 k
 libXtst                                      x86_64                  1.2.2-2.1.el7                               base                                  20 k
 libXxf86vm                                   x86_64                  1.1.3-2.1.el7                               base                                  17 k
 libasyncns                                   x86_64                  0.8-7.el7                                   base                                  26 k
 libfontenc                                   x86_64                  1.1.1-5.el7                                 base                                  29 k
 libjpeg-turbo                                x86_64                  1.2.90-5.el7                                base                                 134 k
 libogg                                       x86_64                  2:1.3.0-7.el7                               base                                  24 k
 libpng                                       x86_64                  2:1.5.13-5.el7                              base                                 212 k
 libselinux-ruby                              x86_64                  2.2.2-6.el7                                 base                                 127 k
 libsndfile                                   x86_64                  1.0.25-9.el7                                base                                 149 k
 libthai                                      x86_64                  0.1.14-9.el7                                base                                 187 k
 libtiff                                      x86_64                  4.0.3-14.el7                                base                                 167 k
 libvorbis                                    x86_64                  1:1.3.3-8.el7                               base                                 204 k
 libxcb                                       x86_64                  1.9-5.el7                                   base                                 169 k
 libxslt                                      x86_64                  1.1.28-5.el7                                base                                 242 k
 libyaml                                      x86_64                  0.1.4-11.el7_0                              updates                               55 k
 mesa-libEGL                                  x86_64                  9.2.5-6.20131218.el7_0                      updates                               69 k
 mesa-libGL                                   x86_64                  9.2.5-6.20131218.el7_0                      updates                              142 k
 mesa-libgbm                                  x86_64                  9.2.5-6.20131218.el7_0                      updates                               30 k
 mesa-libglapi                                x86_64                  9.2.5-6.20131218.el7_0                      updates                               34 k
 pango                                        x86_64                  1.34.1-5.el7                                base                                 283 k
 pciutils                                     x86_64                  3.2.1-4.el7                                 base                                  90 k
 pixman                                       x86_64                  0.32.4-3.el7                                base                                 254 k
 pulseaudio-libs                              x86_64                  3.0-22.el7                                  base                                 555 k
 puppet                                       noarch                  3.7.4-1.el7                                 puppetlabs-products                  1.5 M
 python-javapackages                          noarch                  3.4.1-6.el7_0                               updates                               31 k
 python-lxml                                  x86_64                  3.2.1-4.el7                                 base                                 758 k
 ruby                                         x86_64                  2.0.0.353-22.el7_0                          updates                               66 k
 ruby-augeas                                  x86_64                  0.4.1-3.el7                                 puppetlabs-deps                       22 k
 ruby-irb                                     noarch                  2.0.0.353-22.el7_0                          updates                               87 k
 ruby-libs                                    x86_64                  2.0.0.353-22.el7_0                          updates                              2.8 M
 ruby-shadow                                  x86_64                  1:2.2.0-2.el7                               puppetlabs-deps                       14 k
 rubygem-bigdecimal                           x86_64                  1.2.0-22.el7_0                              updates                               78 k
 rubygem-io-console                           x86_64                  0.4.2-22.el7_0                              updates                               49 k
 rubygem-json                                 x86_64                  1.7.7-22.el7_0                              updates                               74 k
 rubygem-psych                                x86_64                  2.0.0-22.el7_0                              updates                               76 k
 rubygem-rdoc                                 noarch                  4.0.0-22.el7_0                              updates                              317 k
 rubygems                                     noarch                  2.0.14-22.el7_0                             updates                              211 k
 ttmkfdir                                     x86_64                  3.0.9-41.el7                                base                                  47 k
 tzdata-java                                  noarch                  2015a-1.el7_0                               updates                              143 k
 xorg-x11-font-utils                          x86_64                  1:7.5-18.1.el7                              base                                  87 k
 xorg-x11-fonts-Type1                         noarch                  7.5-9.el7                                   base                                 521 k

Transaction Summary
=============================================================================================================================================================
Install  1 Package (+79 Dependent packages)

Total download size: 73 M
Installed size: 178 M
Is this ok [y/d/N]: y
...
...
Complete!
[root@puppetserver ~]#

Now, Puppet Server has been installed successfully with all the requisite packages. Start the Puppet Server service, but first verify that enough RAM is allocated to the OS and the JVM. CentOS 7.0 ships with systemd as its default system and service manager.

service puppetserver start

[root@puppetserver ~]# service puppetserver start
Redirecting to /bin/systemctl start  puppetserver.service
[root@puppetserver ~]# 
[root@puppetserver ~]# ps -ef | grep puppet
puppet   13114     1 99 23:25 ?        00:01:55 java -Xms1g -Xmx1g -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=kill\ -9\ puppetserver -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/puppetserver -Djava.security.egd=/dev/urandom -cp /usr/share/puppetserver/puppet-server-release.jar clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetserver/conf.d -b /etc/puppetserver/bootstrap.cfg
root     13416 13076  0 23:26 pts/2    00:00:00 grep --color=auto puppet
[root@puppetserver ~]#
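Since systemd manages the service (the package ships a systemd unit, as the redirect above shows), you will likely also want Puppet Server to start at boot. A quick sketch using the standard systemctl commands:

systemctl enable puppetserver
systemctl status puppetserver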

[root@puppetserver ~]# puppet agent -t
Info: Caching certificate_revocation_list for ca
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetserver.puppetlabs.vm
Info: Applying configuration version '1425232646'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.01 seconds
[root@puppetserver ~]#

Memory Allocation

Puppet Server is configured to use 2GB of RAM by default. However, when running Puppet Server in production you should allocate a generous amount of RAM to it for optimal performance. The syntax to change the Puppet Server memory allocation is shown below:

Open /etc/sysconfig/puppetserver and modify these settings

# Modify this if you'd like to change the memory allocation, enable JMX, etc
JAVA_ARGS="-Xms2g -Xmx2g -XX:MaxPermSize=256m"

Replace 2g with the amount of memory you want to allocate to Puppet Server. Remember to restart the puppetserver service after making changes to this file.
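For example, to double the allocation to 4GB and apply the change (a sketch; 4g is just an illustrative value, size it to your host):

# edit /etc/sysconfig/puppetserver by hand, or:
sed -i 's/-Xms2g -Xmx2g/-Xms4g -Xmx4g/' /etc/sysconfig/puppetserver
systemctl restart puppetserver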

Puppet Server: Configuration

Puppet Server honors almost all settings in puppet.conf and should pick them up automatically. However, for some tasks, such as configuring the web server or an external Certificate Authority, the new Puppet Server-specific configuration files and settings should be used.

Config files

All of Puppet Server’s new config files and settings are located in the conf.d directory. The tree structure is shown below. The modules and manifests directories remain in /etc/puppet.

[root@puppetserver puppetserver]# pwd
/etc/puppetserver
[root@puppetserver puppetserver]# tree
.
|-- bootstrap.cfg
|-- conf.d
|   |-- ca.conf
|   |-- global.conf
|   |-- os-settings.conf
|   |-- puppetserver.conf
|   |-- web-routes.conf
|   `-- webserver.conf
`-- logback.xml

1 directory, 8 files
[root@puppetserver puppetserver]# 

Puppet Server reads all the .conf files in the /etc/puppetserver/conf.d directory at startup.

global.conf

This file contains global configuration settings for Puppet Server. You shouldn’t typically need to make changes to this file. However, you can change the logging-config path for the logback logging configuration file if necessary.  

global: {
  logging-config: /etc/puppetserver/logback.xml
}

webserver.conf

This file contains the web server configuration settings. The webserver.conf file looks something like this: 

[root@puppetserver conf.d]# cat webserver.conf 
webserver: {
    client-auth = want
    ssl-host = 0.0.0.0
    ssl-port = 8140
}
[root@puppetserver conf.d]#

By default, Puppet Server is configured to use the correct Puppet Master and CA certificate.

puppetserver.conf

This file contains the settings for Puppet Server itself.

The jruby-puppet settings configure the interpreter:

  • gem-home: This setting determines where JRuby looks for gems. It is also used by the puppetserver gem command-line tool. If not specified, it defaults to /var/lib/puppet/jruby-gems.
  • master-conf-dir: Optionally sets the path to the Puppet configuration directory. If not specified, it defaults to /etc/puppet.
  • master-var-dir: Optionally sets the path to the Puppet variable directory. If not specified, it defaults to /var/lib/puppet.
  • max-active-instances: Optionally sets the maximum number of JRuby instances to allow. Defaults to ‘num-cpus+2’.

The profiler settings configure profiling:

  • enabled: if this is set to true it enables profiling for the Puppet Ruby code. Defaults to false.

The puppet-admin section configures the Puppet Server’s administrative API.

  • authorization-required determines whether a client certificate is required to access the endpoints in this API. If set to false, the client-whitelist will be ignored. Defaults to true.
  • client-whitelist contains a list of client certnames that are whitelisted to access the admin API. Any requests made to this endpoint that do not present a valid client cert mentioned in this list will be denied access.
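
Taken together, these settings might look roughly like the sketch below in puppetserver.conf (the values shown are illustrative defaults, not recommendations for any particular environment):

# /etc/puppetserver/conf.d/puppetserver.conf (illustrative sketch)
jruby-puppet: {
    # where JRuby looks for gems
    gem-home: /var/lib/puppet/jruby-gems
    # Puppet confdir and vardir
    master-conf-dir: /etc/puppet
    master-var-dir: /var/lib/puppet
    # maximum number of JRuby interpreters
    max-active-instances: 4
}

# profiling of the Puppet Ruby code
profiler: {
    enabled: false
}

# administrative API access control
puppet-admin: {
    authorization-required: true
    client-whitelist: []
}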

ca.conf

This file contains settings for the Certificate Authority service.

  • certificate-status contains settings for the certificate_status HTTP endpoint. This endpoint allows certs to be signed, revoked, and deleted via HTTP requests. This provides full control over Puppet’s security, and access should almost always be heavily restricted.
    # CA-related settings
    certificate-authority: {
        certificate-status: {
            authorization-required: true
            client-whitelist: []
        }
    }

os-settings.conf

This file is set up by packaging and is used to initialize the Ruby load paths for JRuby. The only setting in this file is ruby-load-path.
The Ruby load path defaults to the directory where Puppet is installed. In this release this directory varies depending on what OS you are using.

[root@puppetserver conf.d]# cat os-settings.conf 
os-settings: {
    ruby-load-path: [/usr/share/ruby/vendor_ruby/]
}
[root@puppetserver conf.d]#

logback.xml

All of Puppet Server’s logging is routed through the JVM Logback library. By default, it logs to /var/log/puppetserver/puppetserver.log and sends nothing to syslog.
The default Logback configuration file is /etc/puppetserver/logback.xml.
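
For reference, a minimal Logback configuration looks something like the sketch below. This is a generic Logback example, not the exact file shipped by the package; the appender name and log pattern are illustrative:

<configuration>
  <!-- write log output to the default Puppet Server log file -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/var/log/puppetserver/puppetserver.log</file>
    <encoder>
      <pattern>%d %-5p [%c{2}] %m%n</pattern>
    </encoder>
  </appender>
  <!-- raise the level to debug for more verbose output -->
  <root level="info">
    <appender-ref ref="FILE"/>
  </root>
</configuration>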

Most of the Puppet Server configuration settings can be left at their defaults and modified only when required.

Performance

Puppet Server is fast: it delivers roughly a 3x performance improvement over the classic puppet master, which means an individual Puppet Server can handle a much larger volume of puppet agent nodes. The performance gains should keep growing as the product matures. We also now deal with a single, simplified configuration in Puppet Server rather than managing several discrete packages (Apache, Passenger, Puppet, etc.) with their separate configuration interfaces for the puppet master.

Conclusion

Puppet Server is very straightforward to install and set up. Along with that ease, it provides a huge performance improvement over the classic puppet master setup, and a single command (yum install puppetserver) does all the work for you.

After some hands-on time with the new Puppet Server, you will really feel the difference. You will enjoy it.

 

Come meet us at RootConf 2016 in Bangalore this week!

Conferences are some of the greatest ways to connect with your peers and to soak up new knowledge. However, great conferences are not easy to find, especially Open Source-focused ones. Well, I have good news today: the great RootConf conference is coming up this week in Bangalore, and you can come and meet the OlinData India team and myself there!
 

Database BOF

On Thursday, I'll take part in a Birds-of-a-Feather session about open source databases together with Colin Charles, Srihari Sriraman and Raj Sekhar. This promises to be interesting as some of us come from a MySQL background while others prefer Postgres. We'll do our best to keep away from the us-vs-them stuff and focus on a productive session instead.

Puppet session

On Friday I'll present a session with some experiences from a project we did with OlinData last year for a large telco in the Netherlands. Lots of conclusions, explanations and a bit of fun here.

Puppet workshop

On Saturday our expert puppet trainer Kaustubh Chaudhari will present a workshop together with me. We'll teach you all you need to know about running puppet in a larger production deployment. 
 
Of course, outside of these events we'll be around as much as possible. Just look out for OlinData shirts, we'll be running around :) Tweet, Facebook, email or send a LinkedIn message if you want to catch up!
 