I’m just going to go ahead and blame @dougbtv for all my awesome and terrible ideas. We’ve been working on several Ansible playbooks to spin up development environments, like kucean.
Due to the rapid development nature of things like Kubernetes, Heketi, GlusterFS, and other tools, it’s both possible and probable that our playbooks could become broken at any given time. We’ve been wanting to get some continuous integration spun up to test this with Zuul v3, but the learning curve for that is a bit more than we’d prefer to tackle for some simple periodic runs. The same goes for Jenkins or any number of other continuous integration software bits.
Enter the brilliantly mad mind of @dougbtv. He wondered if AWX (Ansible Tower)
could be turned into a sort of “Poor Man’s CI”? Hold my beer. Challenge
accepted!
Recently I’ve been playing around with AWX (the upstream, open source code base
of Ansible Tower), and wanted to make it easy to deploy. Standing on the
shoulders of giants (namely @geerlingguy)
I built out a wrapper playbook that would let me easily deploy AWX into a VM on
an OpenStack cloud (in my case, the RDO Cloud). In this blog post, I’ll show
you the wrapper playbook I built, and how to consume it to deploy a development
AWX environment.
It’s been a while since I had the original vision of how storage might work
with Kubernetes. I had seen a project called Heketi that helped to make
GlusterFS live inside the Kubernetes infrastructure itself. I wasn’t entirely
convinced on this approach because I wasn’t necessarily comfortable with
Kubernetes managing its own storage infrastructure. This is the story about how
wrong I was.
In this scene I’ll explore some of the bootstrapping I’ve been working on for a
while that will result in a clean, shiny new Bifrost deployment, populated with
inventory, executed from your laptop to a virtual machine.
Bifrost is an OpenStack project that utilizes OpenStack Ironic to provision
baremetal nodes. This is related to my previous post on Building the virtual
Cobbler deployment.
In scene 1b, we’ll continue with our work from the Building the virtual Cobbler deployment and get a kickstart file loaded into Cobbler. I’ll mostly be reviewing the kickstart file itself, and not really getting into how to manage Cobbler (that’s left as an exercise for the reader).
In this scene I’ll discuss how I’ve built out a local Cobbler deployment into
my virtual host in order to bootstrap the operating system onto my baremetal
nodes via kickstart files and PXE booting.
Edit 2017-08-09: Updated diagram 1-1 to a graphic showing the entire lab
physical topology
The yakLab is a place where yaks are electronically instantiated for the
purpose of learning and documenting. The lab consists of a virtualization host
(virthost) which has 64GB of memory and hosts all the virtual machines,
primarily for infrastructure.
Today I went down a yak-shaving path trying to figure out how to get all the available tags in a fairly complicated plethora of Ansible playbooks and roles. One such situation involves TripleO Quickstart, which is made up of several different playbooks and repositories of different roles.
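For a single playbook, Ansible can enumerate its tags directly; the playbook path below is just an example stand-in for your own entry point:

```shell
# List every tag defined across a playbook, its includes, and its roles.
# (The playbook path is hypothetical; substitute your own.)
ansible-playbook playbooks/site.yml --list-tags

# Relatedly, --list-tasks shows each task along with the tags applied to it.
ansible-playbook playbooks/site.yml --list-tasks
```

The wrinkle with something like TripleO Quickstart is that it spans multiple repositories, so you may need to run this against each playbook entry point rather than getting one consolidated list.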
I recently had a need to install Python 2.7 on an older CentOS 6 machine since I wanted to generate some SSL certificates for my web server. On CentOS 6, the default Python installation is 2.6, which doesn’t seem to work for Let’s Encrypt.
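One common way to get a parallel Python 2.7 onto CentOS 6 without disturbing the system Python is via Software Collections (SCL); this is a sketch of that approach, not necessarily the exact method used in the post:

```shell
# Enable the Software Collections repository, then install the
# python27 collection alongside the system Python 2.6.
sudo yum install -y centos-release-scl
sudo yum install -y python27

# Start a subshell where "python" resolves to 2.7 instead of 2.6.
scl enable python27 bash
python --version
```

Installing side-by-side matters here: yum itself depends on the system Python 2.6, so replacing it outright can break package management.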
In this blog post I’ll discuss how I’m currently using TripleO Quickstart to instantiate a virtual machine on a remote virtual machine host from my workstation. In follow-up blog posts I’ll discuss how to utilize that virtual machine to provision both virtual and baremetal overclouds.
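The basic workstation-side invocation looks roughly like this; the virthost hostname is an example, and your environment may need extra flags:

```shell
# Fetch TripleO Quickstart and run it against a remote virt host
# over SSH ($VIRTHOST is an example hostname for the remote machine).
git clone https://github.com/openstack/tripleo-quickstart.git
cd tripleo-quickstart

export VIRTHOST=virthost.example.com
bash quickstart.sh $VIRTHOST
```

quickstart.sh sets up a local Python virtualenv with Ansible, then drives the remote host over SSH, so the workstation itself only needs git, bash, and SSH access to the virthost.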