CMake/CPack/CTest/CDash: Open Source Tools to Build, Test, and Deploy C++ Software

Uploaded by GoogleTechTalks on 05.01.2010

Welcome. Thank you for joining us. Today, we have Bill Hoffman from Kitware talking
about the cool work they've been doing on open source build systems for C++. So, without
further ado, Bill. >> HOFFMAN: Thank you. Okay. My name's Bill
Hoffman, I'm from a company called Kitware, and I'm going to be talking about CMake, CPack,
CTest, and CDash. And these are open source tools, anybody can use them to build, test
and deploy software. C++ was the main target but we can do other software as well. So,
here's a quick overview of the talk. The first thing I'm going to talk about is Kitware itself,
just a really short introduction to give you an idea of what the company is and what we
do. And then, I'll get into CMake and the build tools. So, the company was founded in
1998. The founders are five previous employees of GE Corporate Research and Development and
it's a privately held company. We've never had any debt, so we just sort of bootstrapped
the whole company one contract at a time. This year, I think we'll hit about 14 million
in revenue. We do principally consulting and grants with some product revenue. But we're
basically sort of a research outsourcing shop; people can outsource their research to us, and we're really
heavy into open source. We have about 65, 70 employees right now. We've been growing
about 30% every year. There's about 25 PhDs in the company, a lot of masters level, very
developer-heavy. In fact, I think for the first five years, we had a part-time administrative
person and that was pretty much it. I used to run QuickBooks for the company. That was
fun. I was an accountant for a while. Okay. So, what do we do? What do we do? Lots of
hats at a small company. So, what we've evolved into is what's on the screen here. This is
our splashy marketing PowerPoint slide. So we do supercomputing and visualization. So
we do a lot with National Labs: Sandia, Los Alamos; and our roots are really in visualization.
The Visualization ToolKit, VTK, was really the core of the company. It's a C++ toolkit
for developing visualization applications. We've done a lot of work there to create parallel
visualizations so we can do things with render clusters. So they might have a hundred node
GPU farm, plus a thousand-node computation cluster; and we can drive that from a desktop
and do rendering and compositing back on the desktop or even over the web. And then, we've
also got a really strong medical imaging group that does some interesting stuff, doing segmentation
and registration work with medical data. And we've also gotten a little bit into the data
publication area where we can take these large data sets, either supercomputing visualization
data sets, medical data sets; and store them online, get to them with data bases and then
write applications to visualize them. And in the past couple of years, we started a
computer vision group as well. And they mainly work in wide-area persistent surveillance;
so video recognition, tracking people on videos. And really, all this is tied together with
our software process tools that we've created that are open source that I'll be talking
about. So, here's a quick slide of the workflow that we've created. And the idea here is that
we've got developers, they write their code, they check it into their version control system
of the month--Git, SVN, CVS--the idea is, you know, whatever version control system they
use. They check it in, it goes into a repository, and then it goes off to a testing
farm that pulls the code out of the version control system, and then sends that data to
a web application we call CDash which is PHP, MySQL, LAMP stack application that can display
the build result. And this really came out of GE's Six Sigma push, if you can believe
that. Six Sigma was this effort--I think Motorola started it--where they tried to get
the defects in really large manufacturing down to six sigma: six standard
deviations from the norm, so you want really few defects. And if you've got a big company
like GE and you want to push an initiative like this, and you're Jack Welch, you just
say, "Well, everybody has to do it." From the administrative staff on up, everybody
had to take training in this. And we were researchers at a research center; we wrote a lot of research
code that could be thrown away, and it's like, "I don't know. How does this fit us?" But
some of us really like software and we like software quality, and came up with this idea
of creating--well, what can we measure in software? What sort of defects can we measure
in software? Well, you can measure things like number of build errors, number of build
warnings, number of test failures, test timings. Does it change over time? Is the software getting
slower or faster? And then, create a dashboard and a way to look at these things. And that's
what this system does; and I'll be talking about these tools. So a quick overview of
that part of the talk; first, I'll talk about building with CMake and what the actual CMake
build tool does and how it works. And then, I'll follow that up with testing with CTest
and CDash. And finally, we've got a packaging tool called CPack which can create installers
from your CMake list files that describe your build and install tree. So, CMake. This is
the CMake section. It's a cross platform build system. And I'll be talking about why you're
going to use CMake. I'll cover some of the cool features it's got and just basics of
how you would use it and how you'd write an input file for it. You can get help; we've
got a book we've published from Kitware called "Mastering CMake." There's a webpage,
a Wiki, and a very active mailing list. If there's any questions, I try to make
sure they're answered. Some of the time, I don't even have to answer them anymore;
there's a growing community. Yeah? >> [INDISTINCT]
>> HOFFMAN: I think so, yeah. Yeah. I think we're seeing a lot. And they can really help
out with the--we've got modules that find third-party libraries. So there's a whole
module directory for, you know, finding things from libxml to other things. And I really
don't have... >> [INDISTINCT]
>> HOFFMAN: Yeah. Well, to find software. So CMake's got to find, like, the installed
version of libxml on this machine. And I'll talk about that a little bit more,
but they've been contributing a lot to that because I don't have access to all
the software. I don't use every package in there. There's hundreds of packages. And without
the help of the community, that stuff would really not be as good as it is. So, what is
a CMake or why CMake? Okay. It's easy and it works well, that's what I like to say;
a build system that just works and it's easy to use across platforms. So this slide here
shows a typical project, it's actually the cURL Project. It's an open source C thing
for grabbing network activity, you know, FTP, cURL. It's pretty neat package and it's pretty
common. And if you look in the things I have highlighted in red, that's their build system.
So you can see there's a Makefile, a, a, an aclocal, a buildconf,
a, a reconf script; there's even a Visual Studio 6 DSW file,
something called maketgz to make installers. And then down here they've got makefiles
for Watcom, Borland, NetWare; you get the idea. A lot of stuff, and I guarantee you,
at any one point in time, all of those aren't going to be working. What happened with the
project is someone came along and said, "Hey, I'm using cURL with NetWare and I've got this
makefile. Can I contribute it to you?" And the guy says, "Sure, yeah, cool. Well, throw
it in." And this stuff, if it's not tested and not tried everyday, it's probably going
to break. Someone's going to get a new version of that compiler. And in the light blue there,
that's what it requires to build it with CMake. So there's a CMake list file at the top, one
at the bottom. There is a directory that has, like, one or two module files, but it's really
small. And what this allows that group to do is outsource the maintenance of all
these different things to the CMake team. So they don't have to worry about how to generate
a NetWare makefile. If CMake supports it, then the CMake team is going to worry about
it. And the CMake team is going to worry about it by looking at the dashboard to make sure
that it's still supported and if it's--once it's supported, it's always going to be supported
as long as we run it on the dashboard. So, another reason to use CMake; it's fast. We
try to make it as fast as possible. This is a blog I found off the web. It was a Quantum
GIS codebase switched from Autotools to CMake. And their overall build went from about 22
minutes to 12 minutes. I think a large part of the improvement here could be not using
Libtool, which is essentially a shell script which, on startup, basically has to figure
out everything there is about your platform. And it's got information for all the hundreds
of platforms it supports, and has to trace through all that stuff every single time it
goes to create a library. Whereas something like CMake generates a static makefile once
for the system and then every single library that's built in the system just happens as
fast as possible. Why use CMake? Everyone's using it. In 2006, KDE switched over to using
CMake as their main build system for the KDE desktop for Linux. And this is a Google
search trends chart that I like to show. And you can see, we didn't really even show up on
the map, although the project has been around since about 2000, until around 2005, 2006
when the KDE guys started using it. People started saying, "Well, what
is this CMake thing?" And we've grown quite a bit. There's about 12,000 downloads a day
from our website; but it's also distributed with Major Linux distributions. Cygwin provides
CMake packages. Some big projects use it: KDE. Second Life is using it for their build system.
Boost is using it experimentally; we're working on moving that onto their main system, and
lots of other projects. So what is it? It's actually become a family of software development
tools. The first one is CMake, which does the building. And the second one is CTest,
CDash, which does the testing. And finally, there's the CPack packaging tool. And it's under
an open source license. It's a BSD style license. The history actually comes from the Insight
Segmentation and Registration Toolkit. This was an NIH funded effort. Is anybody familiar
with the Visible Human Project? No? Yeah? Okay. So, Dan is here. So the Visible Human
Project was a pretty neat project that NIH put on where they took a human body after
they died; and right after death, they CT'd them, MRI'd them, PET Scanned them, every
modality they could--they had at the time; and then they froze the body solid and sliced
it up as thin as they could and took actual pictures. And this gave a really good dataset,
because, if you've ever done anything with segmentation or registration, it's really
hard to get ground truth for this data. If you just look at a CT scan, I mean, you can't
really cut someone open and say, "Did I get it right?" With this dataset, they can. And
they had collected this dataset and that was great for science, but the next step was to
help with the algorithms for that. So they've got this public dataset, then they wanted
to create a public repository for algorithms and it's written in C++. It's got quite a
wide user base now and it really advances the state of the art in that
type of research, registration and segmentation. And we were part of the lead
engineering team for that project. And I remember giving a talk early on in the project, I was
presenting my ideas and one of the guys raised his hand. He says, "Bill, why
are you creating this build thing? It's going to be the ITK build thing. No one's going
to know it, no one, you know--" Before he finished, and before I could answer, he said, "Oh, oh,
wait a minute. You're not talking about changing the way that people build ITK. You're talking
about changing the way people build C++." Like, "Yeah, that's it. That's what we want
to do." So that was a while ago and we're getting closer to that. So--but now, I'd like
to talk about what I mean by that. So, how do we change the way we build C++ with CMake?
So, I saw a really good talk by David Abrahams. He's one of the lead Boost developers. And
he showed that Boost--just explained what Boost tries to do and that it--and he had
a real neat slide he put up there and he says, "You know, here's Java and here's all the
classes that come with it." There was a network class and there's just tons of classes, you
know, hundreds of real rich feature classes. And here's Python and here's all the really
cool stuff that comes with that. And here's C++, and then we got an IOstream library.
Kind of boring. And then, a lot of extra stuff. So--and even STL was added in sort of at the
last minute. I mean, it became part of the standard. It really wasn't vetted by any community.
And Boost aimed to fix that problem for C++ to give the community a place to try the next
things that are going into the standard. So, when the next C++ standard comes out, the
new innovations from it, well, most of them will come from Boost. So, Boost aims to give
C++ a set of useful libraries. CMake aims to give C++ compile-time portability. We can't
do compile once, run everywhere, but we can certainly do: compile it with one input
file, build it everywhere, as easily as possible. And it really makes it easy to
build small--build a small tool that links to a large tool, which is one of our goals
with ITK. So we created this huge C++ library and you want the researcher, some small--some
grad student to be able to pick it up and link it in to his application really easy.
And you can see, so we've got the pictures down on here of all the different platforms.
So, we'd go everything from supercomputers all the way down. I saw a blog of someone
describing how to use CMake with a Nintendo DS. So--and interesting thing, the supercomputer
probably has a lot more in common with programming on a Nintendo DS than the desktops in that
they tend to--a lot of them like the Kraze or something like that have very minimal operating
system, they don't have shared libraries and you have to cross compile to them. So, CMake
has got a pretty good support for cross compiling as well. So, you can write your stuff and
have it build on any platform. So, who's involved? There's a whole bunch of users. There's--KDE
is one of the bigger, more well-known ones. Second Life is using it; ITK, of course;
VTK, our visualization toolkit; ParaView; a numerical package from Sandia Labs called
Trilinos; Scribus; Boost. MySQL's using it as its build system, currently just
for Windows although I saw something in the list where they're, "Why aren't we using this
on UNIX?" And I think they're moving that way. LLVM, which is the next--are you familiar
with that? You're--yes, that's--so the next big thing coming in C++ is open source compilers.
Apple is working on that, and they're using CMake as their build tool. And this
list on the right are the supporters. Kitware, of course, is very interested in keeping CMake
going. Army Research Labs, National Library of Medicine, Sandia National Labs, Los Alamos,
and NAMIC. So the idea of this slide is just to show that there's support behind
it and there's funding going into it. So, it's not--it's not a stagnant project and
it's not a hobby project. There's probably three or four full-time developers actually
getting paid to work on CMake and the CTest and CDash and the whole tool suite. So, it's
here to stay and it's got some legs. So, the documentation, we've got the book you can
buy. We're working on the next version of the book. It's going to be out in the beginning
of the year. So, webpage and the Wiki and the mailing list, there's full reference documentation
online. It's generated from the code itself. It ships with HTML, man pages and command
line help. There's a nice tutorial that's included in the testing tree.
We try to test everything we do, even the tutorials. You can create configured files
with CMake. You can do optional build components. It's got good support for install rules and
test properties. You can do system introspection: check for the existence of features,
so you're not programming to a particular system but more to a canonical system. You can, instead of writing, you know,
"If Apple" in your code, you can say, "If this feature exists." And then it's also got
this packaging tool called CPack and CTest and CDash. So, some of the features, there's
one simple language for all the platforms. It works on Windows, Mac, Linux, all the UNIX
variants, HPC, high-performance computing or embedded platforms via cross-compilation.
This slide here: we did ParaView, which is our visualization tool for high-performance
computing, and we needed to run it on a Cray XT5, I believe. And we wrote the cross-compiling
part of CMake for that project. ParaView uses Python, and we
looked at Python's build tools and tried to get them to work with the cross-compiler. At the end
of the day, it was easier for the developer working on that to just write CMake files
for Python. And I think he got it done within an afternoon; he had it cross-compiling. It
generates native build systems. So, it's not an actual build tool. It's MetaBuild tool.
Right now, we generate Makefiles for GNU Make, NMake, Borland. We do KDevelop, Eclipse, Visual
Studio 6 and up, including the beta, the 2010, Xcode projects. This next slide's really important,
I think. We handle out-of-source build trees very easily. So, you can just, you know, make
a build directory, run "cmake .." to point it back to the source tree. So, on my laptop, I might
have, you know, three or four compilers installed and I'm running off the same source tree and
doing a build with each of the tools I want to test on. There's interactive configuration
via graphical user interface written in Qt. It supports multiple configurations; Debug,
Release, et cetera. There are built-in rules for the common targets, so creating an executable
is just a real quick command, creating shared libraries or DLLs across platform, creating
static libraries or archives. And we also have good support for the Mac or the Apple.
We can create OS X frameworks or application bundles with just, you know, one extra
little bit of markup in the library definition. But if that's not enough, we can also do custom
rules. You know, part of what we did going back to the VTK days, we've always had our
C++ automatically wrapped into other languages. So VTK since, gee, before 2000--since '94--has
had a C++ parser built into it that parses just the header files, and then outputs
Java wrappers or Tcl or Python wrappers. So, we've always needed a complicated build
system that can handle the concept of building some code which would be the generator. Then
later, taking that executable that was built and then running it on part of the source
code for the project, generating more source code and then compiling that source code.
Getting that to work with, say, something like a Visual Studio project by hand is somewhat
tedious and error-prone. With CMake, you create a couple of custom commands and then it's
going to work across all these platforms I talked about. It's going to work in Xcode.
It's going to work in all the Visual Studios. It's going to work in Makefiles. So, it gives
you--you know, it's got a lot of power there. And it's also got configuration rules for
doing system introspection. It's got its concept of persistent variables. These options can
get cached. So, we really try to avoid the concept of environment variables. We see
a lot of projects where, you know, to build this project, you set these five or
six environment variables and then type "make." And if you forget to set them,
or maybe you set them the first time you type "make" and then you go to a new shell and
they're slightly different, you type "make" and things start blowing up and you're scratching
your head, going, "What happened?" When CMake finds something, those find packages I was
talking about earlier; it stores the location in a persisting cache file in that build tree.
So, you can have the multiple build trees. They can have different configurations, different
things they found and it's all stored there and it's not stored in environment variables
and it's not going away; it's tied to the build tree.
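As a rough sketch of the cache idea in the CMake language (the variable names here are illustrative, not from the talk): options and discovered paths can be declared as cache entries, which land in the build tree's CMakeCache.txt rather than in the environment.

```cmake
# CMakeLists.txt fragment: values marked CACHE persist in CMakeCache.txt
# in the build tree, so they survive across runs and across shells.

# A user-visible on/off switch, cached with a default:
option(MYPROJ_USE_MPI "Build with MPI support" OFF)

# A cached path; find_package()/find_library() store their results the same way:
set(MYPROJ_DATA_DIR "/usr/share/myproj" CACHE PATH "Where sample data lives")

# On the command line you can seed or override cache entries per build tree:
#   cmake -DMYPROJ_USE_MPI=ON -DMYPROJ_DATA_DIR=/opt/data ../src
```

Because each build tree owns its own cache file, several differently configured build trees can share one source tree, which is the point being made here.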
>> [INDISTINCT] >> HOFFMAN: No, it's not. Yes. It caches it
once it found it. And then you can go in with the cache editor and remove it and have it
find again if you want it to. But again, it's there for performance as well. It also has
a really good implicit dependency generator for C, C++ and Fortran. A lot of the supercomputing
folks are still writing a lot of Fortran, or even writing new Fortran: Fortran 95 and
stuff. And Fortran actually has this module system which is sort of like, Java. I think
it's what they stole it from, but it's--it makes it a real bear for a build system because
essentially what happens is instead of just saying, you know, include "foo.h," you say,
"use module B," all right? Well, you never write "B.H." When you compile B, it spits
out B.mod. So, the compile order matters: if you compile A before you compile B and A uses
B, it won't compile. So, originally, one of the guys at Kitware converted a
chemists' or physicists' code, old Fortran code, and the guy said, "Explain to me how to use
it." And he said, "You know, you run CMake and then you type 'make'." The guy ran CMake
and he typed "make" and he went, "It's built. I only had to type 'make' once?" Usually,
it takes four or five times, because that's what these Fortran guys do. Anyways, we do
dependency analysis. And this is also real important for developing because especially
with C++, if you have a header file that's out of date with the source file and your
build system lets you down; if you got one object file that thinks a class is 100 bytes,
and another thinks it's 104, you can get really weird bugs. And you walk into the debugger
and you're really scratching your head. That kind of stuff can waste hours of the developer's
time. >> [INDISTINCT]
>> HOFFMAN: Yes, it handles that. Yes. >> [INDISTINCT].
>> HOFFMAN: Fair question. We use CMake actually in the makefiles, at least
at build time. So, Make will call back on CMake to have it do some dependency analysis.
And also when you create those custom rules, you have to--you have to describe your inputs
and outputs. So, we know--we know that something is going to be an output. And then, when we're
scanning for dependencies, if we see one of those outputs show up as an input in a regular
.C file, it knows then that that's coming. And it does sort of a multiple recursive make
kind of thing to make sure that everything happens in the right order.
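A hedged sketch of the custom-rule pattern being described, with invented target and file names: the rule declares its inputs and outputs, so the generated source gets built, and built in the right order.

```cmake
# Generator pattern: build a code generator, run it on a header to produce
# a .cxx file, then compile that generated file into a library.
add_executable(wrap_gen wrap_gen.cxx)            # the "generator" program

# OUTPUT/DEPENDS tell CMake exactly what this rule produces and consumes,
# which is what the dependency scanning described above relies on.
add_custom_command(
  OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/foo_wrapped.cxx
  COMMAND wrap_gen ${CMAKE_CURRENT_SOURCE_DIR}/foo.h
                   ${CMAKE_CURRENT_BINARY_DIR}/foo_wrapped.cxx
  DEPENDS wrap_gen ${CMAKE_CURRENT_SOURCE_DIR}/foo.h)

# Listing the generated file as a source ties the library to the custom rule.
add_library(foo_wrappers ${CMAKE_CURRENT_BINARY_DIR}/foo_wrapped.cxx)
```

The same few lines generate working rules in Makefiles, Visual Studio, and Xcode, which is the portability point being made.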
>> [INDISTINCT] >> HOFFMAN: We do callback into it from other
environments as well when we need to. So, it can get around problems and--so, for instance,
in Visual Studio, if you change the CMake list file, it will notice that. When, say,
you try to build in Visual Studio, it'll say, "Hey, look, the CMake list file changed."
It'll rerun CMake and then CMake can actually--it'll go back and detect a running Visual Studio
and see if it matches that project. If it does, it actually loads some macros that keep
it from saying, you know, reload, reload, reload. It can have it do one reload and resume
the build. So, we've got some Visual Basic plug-ins.
>> [INDISTINCT] >> HOFFMAN: Yes. We haven't had to do it with
Xcode and--yes. Okay. We also handle link dependencies. So inside your project, you
can say, you know, "add library B" and then you can say "library B links to C and D."
And then if you create an executable, you can just say, you know, "link to B" and it
will pull in the other dependencies. It makes it easier to handle large, complicated projects.
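The A/B/C/D example just described might be sketched like this (library names are placeholders):

```cmake
# Link dependencies are transitive: B records that it needs C and D, so an
# executable that links to B automatically pulls in C and D as well.
add_library(C c.cxx)
add_library(D d.cxx)

add_library(B b.cxx)
target_link_libraries(B C D)

add_executable(app main.cxx)
target_link_libraries(app B)   # C and D come along automatically
```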
We also support ordering of link search paths and building rPaths into executables, so you
can run out of the build tree without having to set LD_LIBRARY_PATH. But then when you
do install, it can actually rewrite the rPaths and it actually edits the ELF, so it's fast
during the install. It also has, you know, really fast makefiles. It does nice color
output. You can see on the slide here it's got different colors for different types of
builds. So C++, building an object shows up in green. Linking something shows up in red.
It's got a help target, so you can type "make help" and it'll print out all the targets. You
can do things like make foo.i. It'll preprocess it and it will handle that cross platform
and you can do assembly targets as well. The input to CMake, so what does it look like?
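As a sketch, the kind of minimal input file described next might look like this (project and source names are illustrative):

```cmake
# A complete top-level CMakeLists.txt: finds an installed Boost and links it.
cmake_minimum_required(VERSION 2.8)
project(MyProject)

find_package(Boost REQUIRED thread signals)

include_directories(${Boost_INCLUDE_DIRS})
add_executable(MyExecutable main.cxx)
target_link_libraries(MyExecutable ${Boost_LIBRARIES})
```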
It's a simple scripting language with built-in commands for common rules. So here's an "add_library(MyLib
MyLib.cxx)" or "add_executable." And this bottom example here shows sort of the power
of this. So in one, two, three, six lines, I'm telling it which CMake version
I want to use. I'm giving it a name for the project and then I'm saying "find_package(Boost
REQUIRED thread signals)." And then I'm telling it to do "include_directories(${Boost_INCLUDE_DIRS})",
and then I'm adding an executable, MyExecutable, with some C++. And then I'm telling
it to link to ${Boost_LIBRARIES}. And that's going to work on the Mac. It's going to work
on Windows. It finds the installed boost. If it doesn't find it, you can tell it where
it is by editing the cache through our editors. If it's installed in the usual location, it
should be able to find it. So, installing CMake. So, one of the things that--the biggest
roadblock to adoption of CMake is, "Well, gee, now I have to depend on CMake." So we
tried to make it really wicked easy to get CMake and install it. So on CMake's download
page, we've got binaries for every UNIX--I don't know, five or six UNIXes, like HP-UX and IRIX
even. We get three or four downloads a month for some of these weird platforms, but
we do it, yes. And also, with the major Linux distributions these days, especially with KDE's adoption,
you can usually just, you know, apt-get CMake. But we provide the binaries for all
the platforms and we'll keep doing that. Installing it, you can grab a Windows Binary installer.
There's Linux binaries, and the source can be bootstrapped on any UNIX platform. So this
is really what the process does. So when you--when it configures, it will read the cache file
that I talked about if it exists. Then it'll read the CMake list files and they can include
other CMake list files and have subdirectories. And it'll write the cache back out. And this
can be an iterative process. You can imagine a case where
someone clicked an option that said "use MPI." Well, now it's got to find MPI to enable
parallel, and then it turns something on and that exposes some other option. Once you're
happy with that, it will write out the makefiles or projects. And these are what the editors
look like. There's a graphical one written in Qt that has, you know, nice little buttons
and stuff to click, turn things on and off. And there's also a curses-based one if you
don't have a windowing system. And, of course, you can run it from the command line. You
know, people get scared of these, you know. "Oh, do I have to run a graphical thing to
build my toolkit?" No, no, no. You can run from the command line if you--you know, if
you know the exact options you want, and you're not--and you want to automate a build or--a
lot of people use it like this. It also has some--you can write scripts in the CMake language
and CMake has some simple commands, like "cmake -E": things like copy a file, remove
a file, compare files, get the time. And this is so you can imagine building up a whole
nice cross-platform build system; otherwise you'd be calling cp in the middle of it, and
that's not going to work too well on Windows.
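A hedged sketch of that idea (target and file names invented): custom rules call back into cmake -E for portable file operations instead of relying on platform tools.

```cmake
# Instead of shelling out to cp/mkdir (which don't exist on a stock Windows
# box), custom rules can invoke cmake -E for cross-platform file operations.
add_custom_target(stage_data
  COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_BINARY_DIR}/data
  COMMAND ${CMAKE_COMMAND} -E copy
          ${CMAKE_SOURCE_DIR}/sample.dat ${CMAKE_BINARY_DIR}/data/sample.dat)
```

${CMAKE_COMMAND} expands to the running cmake executable, so the rule works in any generated build system.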
>> [INDISTINCT] >> HOFFMAN: Right. Right. So the only thing
we want to require is CMake and the only thing CMake requires is a C++ compiler. So if you
got a C++ compiler, if you're building C++ code, you figure you have to have a C++ compiler.
So our minimum requirement is a decent C++ compiler, and you can grab the binaries.
But we build this stuff in. And that's why CMake is actually called back by Visual
Studio, by a lot of projects, to do these sorts of cross-platform things that aren't
easy to do cross platform. There's also a scripting mode. You can do "cmake -P"
and write CMake language, and it won't generate a cache. It ignores commands specific to building,
like "add_library" or something like that, if you want to do something a little more
complex. But again, we want to try to avoid having dependencies on other bigger systems.
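As an illustrative sketch of script mode (the file and its contents are invented, not from the talk):

```cmake
# cleanup.cmake: run with "cmake -P cleanup.cmake". Script mode executes the
# commands directly: no cache is written and no build system is generated;
# build-only commands like add_library() are not available here.
file(GLOB stale_logs "${CMAKE_CURRENT_LIST_DIR}/*.log")
foreach(log ${stale_logs})
  message(STATUS "Removing ${log}")
  file(REMOVE "${log}")
endforeach()
```

This is how a project can script housekeeping steps without depending on a shell or another interpreter.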
Okay. That sort of wraps up the CTest/CDash part of the talk--I mean, the CMake part of
the talk. We're on to the CTest/CDash. So this slide here is--it's actually a pretty
old slide. It's from a book from 1999 and it's showing the benefits of automated testing.
So it's showing, over time, the cost of a somewhat long test case: say you
had a test case that took 45 days to complete each manual retest. Over a decade, manually
running those tests every time you did a release is going to cost you about 97% more than if
you could just push a button and have it go. It kind of goes without saying, but it's nice
to show people this. And a lot of software that's out there especially open source stuff
doesn't necessarily have as much rigorous testing. And our system is built around that.
I mean, the developers, we don't--we didn't want to create a system that was overly intrusive.
And it's an easy system to use. It's easy to apply on projects and the developers actually
like it. And we've had interns come on and, you know, do code coverage which sounds like
a really boring job but it's really neat when they can watch. Every day they can see their
progress. They can, you know, "Hey, look, I got it to go from, you know, 75% to 78%
today by adding these tests." So it gives them something to look at, something to work
forward--work toward and it's neat stuff. So this video here--let me show this. This
is running a nightly regression test of ParaView. And we've recently added into CTest the ability
to run tests in parallel, so you can say "ctest -j N" and run. This is running on, I think,
an eight-core Windows box, and it's doing a -j 8 and running our application here.
And this is doing screen captures in the background and making sure the images look like what
we expect. And this stuff goes on every night at Kitware on a bunch of different platforms.
And it--we can take advantage. We're trying to make it scalable, so it'll take advantage
of all the cores that are out there. >> [INDISTINCT]
>> HOFFMAN: Yes. You can add test dependencies. So that was one of the first things--we added
this and then you run it on ITK and you say go, and it's like, "Gee, why did these 50 tests fail?"
Yes. That actually happened with ParaView. The test case opened up a socket,
testing client-server. So you can add dependencies on the tests and then it'll make sure that
they--it blocks until they run. >> [INDISTINCT]
>> HOFFMAN: You can only put dependencies on a test: you know, test name depends
on some other test name.
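A hedged sketch of what that looks like in the CMake language (test names and the driver program are invented): add_test registers a test, and test properties express the ordering constraints being discussed.

```cmake
# Register two tests; a test passes if its command returns zero.
add_test(NAME StartServer COMMAND test_driver --start-server)
add_test(NAME ClientQuery COMMAND test_driver --query)

# ClientQuery will not be scheduled until StartServer has run, even
# when tests are being executed in parallel with "ctest -j N":
set_tests_properties(ClientQuery PROPERTIES DEPENDS StartServer)

# A test that must run with the machine to itself:
set_tests_properties(StartServer PROPERTIES RUN_SERIAL TRUE)
```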
>> [INDISTINCT] >> HOFFMAN: Yes. There's a flag you can--there's
a property you can set on the test that says, "I need to be run by myself." But you can
also say, "I need to run after this test," because a lot of times people write a test
suite where the output of one test is used as the input to the next test and they just
assume that they're going to run in linear order. And you can set up that dependency
as well. But, you know, if you've got ideas, too, I'm open to ideas of, you know, adding
new stuff into it as well. So, to create a simple test, you write "add_test": you
give it a name, an executable to run, and some arguments. The test is expected
to pass if it returns zero. You can also set it up so it will pass based on some regular
expression matching of the output. >> [INDISTINCT]
>> HOFFMAN: Not flakiness. You can set expected to fail.
>> [INDISTINCT] >> HOFFMAN: Yes, although that's a really
good idea because we've got some tests like that as well.
>> [INDISTINCT] >> HOFFMAN: Yes. No, that's a really good
idea and we've actually got a few of those tests in CMake. I think right now the packaging
on Apple occasionally just doesn't work. Yes.
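Pulling together the properties just discussed, a sketch (test names, commands, and expressions are hypothetical):

```cmake
# A simple test: passes if the executable returns zero.
add_test(SystemInfo mytests --system-info)

# Pass/fail decided by matching the output instead of the return code:
set_tests_properties(SystemInfo PROPERTIES
  PASS_REGULAR_EXPRESSION "All tests passed"
  FAIL_REGULAR_EXPRESSION "(ERROR|Exception)")

# A test that is expected to fail (e.g. known-broken packaging):
set_tests_properties(ApplePackaging PROPERTIES WILL_FAIL TRUE)

# A test that must run by itself, not concurrently with others:
set_tests_properties(SocketTest PROPERTIES RUN_SERIAL TRUE)
```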
>> [INDISTINCT] >> HOFFMAN: All right. So, to run the tests,
there's an executable called ctest, which is delivered with the CMake binary, and
it can run either as a client for CDash for this continuous integration testing--we'll
get into how that works--or you can just run it from the command line. You can type, you
know, "ctest" before you check code in. And it can be used on CMake-based projects or
even non-CMake-based projects if you wanted to. So here's the dashboard that I've been
talking about. If you look--let me see if I can get this. So down this side here are
the build names. So the neat thing about this testing system, I think, is that it's very
easy for outsiders to contribute to it. So oftentimes, someone might come to me and say,
"Hey, can you port CMake to this?" And I go, "Yes, if you run a dashboard," because otherwise,
I'm wasting my time. There was a guy down in Australia that uses QNX. He's
using VTK on the QNX operating system. He does these room-size robots that do mining out
in the Australian outback. And he's a big CMake fan and, years ago, he came to me and said,
you know, "Hey, can you get CMake to work on QNX?" And I said, "Sure, if you run a dashboard."
He's been running a continuous and a nightly dashboard for years now, and I've actually
developed stuff for QNX by checking in code with a couple of print statements, you know,
"#ifdef QNX," wait five minutes, look at his results, and I'm like, "Ah, that's what it's
doing on that platform." And anyway, you know, he's in a different time zone and everything,
and I've been able to work with it without even bothering him. And the nice thing is
I know it's going to keep working. As soon as it's off the dashboard, you know, we're
going to break it if you're not testing it. So anyways, down this side is the site name
over here and you can pretty much pick that. Some are Kitware, some are not. This is a
build name. So, this one is a Linux 64. There's a Darwin, which is a Mac, another Linux 64,
another couple of Macs. We've got a style-checking test that checks the C++ coding
style. And then over here, it's got the update column, which is what files were updated from
the version control system. There's a master one up here. You can look at the nightly changes--
this is where you see what just happened. And the nightly builds actually pull from
a timestamp you set. So you say, you know, "9:00 Eastern Standard Time" is when I want
my dashboard to run. And then a CTest client looks at that configuration file and it pulls
from the version control system a copy of the code that matches that time. So, each
one of these is testing the exact same snapshot of the code. You can set it up to do revisions,
yes, if you want. But the nightly--yes, even if you're on, say, a branch, you want to still
pick a specified time. You can also do experimental builds which are just any random thing that
comes up. And then you can also do the continuous builds which are set up to run, you know,
as soon as something is checked in to the repository.
>> [INDISTINCT] >> HOFFMAN: Yes. Yes, you can tell it to remove
the whole binary tree if you want. Usually, you do an out-of-source build and then you
can set a variable to clean the build tree. [INDISTINCT] stuff. This was a project we
did with Sandia National Labs. And they have a project called Trilinos and the idea here
is they have about 42 subprojects and they've got teams of people working on each one of
those subprojects. And the guy who's working on Teuchos or Epetra doesn't want to
hear about the problems in Zoltan. He doesn't want to get emails from the dashboard system
when the Zoltan guys break something. He doesn't care, and it doesn't affect him. But again, someone
might care about all of Trilinos, someone who wants to be able to see it as a whole. And
we created this concept of subprojects in the dashboards so that, you know, the
Teuchos guy can set up, you know, "I want to get emails when anything bad happens on my
project and not when anyone else breaks something." So it can scale to really
large projects. It also supports these query filters, so you can set up show filters
or hide filters, and then you can do things like, you know, show this build name
matching this over this date range. You know, I want to see this particular platform for
the past five days, and it'll show all the builds for that. Or even I have a custom one
that shows me basically all the errors. I can click on it; I saved it as a link
up the top. Here, I think it--yes, there it is. I've got it saved as a link up here in
my taskbar: "CMake only errors." So I click on that and it's a saved query. You can
create a hyperlink from the query. It's all PHP stuff--pretty straightforward web
stuff, but it's nice stuff to have. Another thing we recently added--in prior versions
of CMake, we always did, essentially, log scraping. And for some of the build tools like
Visual Studio, we still do log scraping, which is: let the tool run and then have a bunch
of regular expressions that pull out things that look like an error, you know--look for
"error:" or whatever. Which is okay, but you get things like this particular one around here:
with C++, if you have one error, you're likely to have 50 in the file. So what this does,
basically--you can turn on a mode, because we don't want to pay the performance cost
every time, but if you're building it as a nightly test, you can set it up in a mode
where it'll essentially wrap each command in the makefile with a launcher that runs the
command. And if the command returns a non-zero value, it knows it failed, and then it can
store the standard error and standard output, and show it on the dashboard as one single
failure instead of lots of little
failures. Again, this isn't supported across all the build tools we have, but at least
on the makefiles, it works. We also have coverage built in. So, there's two tools
right now that we support for doing coverage analysis. One is standard gcov, and the
other is a commercial tool called Bullseye, which does branch-based coverage, which is
actually a kind of neat concept. Instead of just showing the number of times a particular
line of code was executed, it will show--if it's got a branch, like down here it's got
an if statement--I don't know if you can read it, but it's got a capital F there, meaning
that the condition was only ever false when it was run. And again, these tools are things that--everybody
knows it's good to do code coverage. And I think I ran gcov once. So, CMake is essentially
an expert system--it's stored this information, it knows how to run gcov for
you. You set it up and then all the developers on the team can see their coverage without
having to run the tool. It's run for them every night. You can watch the coverage go
up and down. You can see it when your tests come in. You can even have it send you email
when a particular file is low on code coverage from the testing. And we also do the
same sort of thing with Valgrind and Purify. And again, these are tools--when
do people run Valgrind? When something weird is happening, right? But it's not necessarily
part of every developer's routine--you know, before they check in their code, do they always run
it? Maybe, maybe not. You don't know. With this, we know. We can say--you know, when
that bug showed up, when it crashed, you go right back to the dashboard: "Why didn't
Valgrind catch that?" "Oh, we didn't have good coverage." And then you go back and you find
out it wasn't covered, and that's why it wasn't caught. But again, these are tools
that people know how to use, but they might not use them every day because they're busy,
they want to get stuff done. And the sooner you catch bugs like this, the better. I mean,
as soon as someone has checked in that bad array-bounds write or memory leak,
even if they didn't catch it, you can go back in the system and see which
files were checked in that day; it's a lot easier to fix than if you're sort of scratching
know, you get the idea. It can send you emails. This is a typical CDash email notification
saying something like, you know, "A submission to CDash for the project CMake has failing
tests. You've been identified as one of the developers who checked something in." And it'll
give you a link and a short little description. And if you want to try it, we've got it posted
at Kitware. We've got it at "". You can just click on there and say, "Start
my project," and it'll create a CDash project for you. And then it creates a configure file
that you can drop next to your CMake lists and you can get going with it pretty much right away.
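The drop-in configure file he's describing corresponds to a `CTestConfig.cmake` at the top of the source tree. A sketch, with the project name and server entirely hypothetical (CDash generates the real values for you):

```cmake
# CTestConfig.cmake -- drop next to the top-level CMakeLists.txt.
# Project name and server below are placeholders.
set(CTEST_PROJECT_NAME "MyProject")
set(CTEST_NIGHTLY_START_TIME "21:00:00 EST")

set(CTEST_DROP_METHOD "http")
set(CTEST_DROP_SITE "cdash.example.com")
set(CTEST_DROP_LOCATION "/submit.php?project=MyProject")
set(CTEST_DROP_SITE_CDASH TRUE)
```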
And it's all open sourced as well, so you don't need to use us. But if you don't happen
to have a LAMP-stack web server lying around, you can try this. How does the email know
which tests failed? So, when the client did a build, it created an XML file
that showed which tests failed and sent it up to CDash, and CDash said, "All right, these
tests weren't failing the last time I got a build from this machine."
>> [INDISTINCT] >> HOFFMAN: For the whole process. So when
I run CTest where I'm doing a nightly or continuous, it'll collect up all the test results and
put them in an XML file. And each test has a name. So, you can see here, like, "SystemInformationNew"--
that's a test name in CMake. >> [INDISTINCT]
>> HOFFMAN: Well, there's two ways you can do it. One is the executable's return code--zero
or non-zero. Or two, you can do some sort of regular expression matching on the output
of the program. So, you can set a pass regular expression or a fail regular expression
on the test. Those are the two ways you can make it. >> [INDISTINCT]
>> HOFFMAN: Yes. There's a one to one, yes. >> [INDISTINCT]
>> HOFFMAN: And then it looks at the version control information, the update line, to see
who's checked in code. Now, sometimes we get some false positives here. I mean, you can't
always narrow it down, you know--twelve people check in code all at the same time, one test starts
failing, and they'll all get the email. But they quickly point the finger at the other guy:
"It wasn't me." >> [INDISTINCT]
>> HOFFMAN: So, you have like one test that runs…
>> [INDISTINCT] >> HOFFMAN: Right. But do you--do you run
it as one big executable? >> [INDISTINCT]
>> HOFFMAN: Yes. That'd be a good thing to look at. And I believe there have been some people
that have done Google Test integration with CTest. I don't have slides on that right now,
but it's been done. But yes, that's a great idea for future work. Yes, a lot of times--
like in ITK, okay--we create one test executable that contains
the ability to run lots of tests, but then we just run it multiple times, once for each individual
test. But I suppose if you--I guess the danger of running them all at once is, if there's a
crash, you pretty much lose… >> [INDISTINCT]
>> HOFFMAN: I posted one. I talked about that. So, a little overview of the CDash testing:
we've got Purify and Valgrind support, and coverage support through gcov or Bullseye. This next
bullet is sort of a thing to think about. You want configuration coverage, and it's
not something CTest does automatically for you, but if you
look at our dashboard, you can see almost all of our builds have spaces in the paths,
because that always seems to trip people up. And if you're doing a build system, you definitely
want to be testing that. And then you also want to be testing different versions of OSes.
I've got a stack of Mac Minis. Every time a newer version of the Mac OS comes out,
I buy another Mini, stack it on top, and run the nightly tests on it.
And then, there's, you know, making sure you're covering all the libraries and options within
your code. And then this final one: CDash, I mean, it doesn't have support
for actually doing the image differencing itself, but you'd put some XML markup
in the output of your program to tell it, you know, here's the baseline image, here's
the image I got, and here's the difference. And then CDash can display that online. And
this has been going on with VTK for, you know, a decade or so. And
we found out early on that, you know, all OpenGL implementations are not created equal--
and sometimes they're validly different. You know, something triangulates one way and
something goes the other way, and it shows
up as an image difference, but it's really not a failed test, so we can support multiple
valid images. Okay, the final part of the talk is CPack. Sure.
>> [INDISTINCT] >> HOFFMAN: Right now, it's a serialization
on one and using the cores. Although, we test things like ParaView which use MPI, but, you
know, runs MPI launch to run those tests. But we have been--we've done some experimental
work with hooking together with some of these batching systems for the supercomputers. So,
if they want to test something on a supercomputer at Sandia, they need to--basically, you have
to schedule it. You can't--you can't just run it. So, we've got some basic support in
there for that right now, it's in the development branch. It's not in any release. But the idea
there is you could specify some tests and have it--batch them off. And then it sits
around and waits until they actually get results and then collect them up. So, we're working
on that, but there's nothing actually in the release.
>> [INDISTINCT] >> HOFFMAN: Yes. All right, so I'm going to
CPack now? >> Yes.
>> HOFFMAN: So, CPack is bundled with CMake, and it creates professional-looking, platform-specific
installers. Again, it takes the same model as CMake: it's a meta-installer tool.
We use NSIS, the Nullsoft installer, on Windows, and we can create tar.gz files. We can also
create OS X PackageMaker installers, RPMs, Debian packages. And again, this is the sort of thing
where--I remember early on, it kept [INDISTINCT] releasing CMake. There was like one guy--he's
the guy that knows how to run our installer system, you know. If he's not around, you can't
make a release. It was one machine that knew how to do it. Again, this sort of takes
that and gives it to developers in a much easier way: if they know how to write the
install rules for CMake in their project, then they can write a nice installer. And
it also supports component-based installs, so you can do the typical stuff of, you know,
install with or without headers, with or without libraries, that kind of thing. And there's
a Wiki page talking about how to do the component installers. To use it on Windows, you
install the command-line zip program or the NSIS installer, and then inside your project, you
set some CPack options--variables that you need--and you include the CPack module, and then
it'll reuse your existing install rules. And then to run it, you can type "make package"
or "make package_source." Or you can run CPack and give it a configuration file and tell
it to do NSIS or ZIP. It's fairly easy to use. There's some more information about
CPack in the book and in this Wiki page. And finally, I want to end with
a real simple example. This is a Qt example. And it's fairly short. It basically says,
up at the top, I want version 2.8 of CMake or greater. And I'm saying "project(HelloQt)"
and then I'm finding the required dependencies--I say, you know, "find_package Qt REQUIRED."
And then I add an executable, and right here, I'm saying it's a WIN32 executable so it doesn't
pop up the console, and it's also a MACOSX_BUNDLE because that's what the Qt thing is going
to be. And here's my source file. And then I'm saying "target_link_libraries" on the
target name, the executable helloQt, and I want to link in the Qt main library plus
the Qt libraries. And then I'm going to install it: "install TARGETS helloQt,"
which is the name of the executable, to a destination directory, bin. I include "install
required system libraries," which will, on Microsoft, pull in the required runtime library
plus the side-by-side manifest stuff that you need. And then I'm setting a version
for my project, 1.0, and finally I'm saying that I want to package this executable--
I'm going to call it "HelloQt"--and then it includes CPack. And now, I think,
to me, this is pretty cool. And this next one is the same thing except now, I want to
use boost inside my Qt, because I'm going to try the boost signals instead of signals
and slots from Qt. And to do that, I need to add four extra lines into the project.
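Reconstructed from the description (the slide itself isn't in the transcript, and the source file name `main.cxx` is assumed), the whole thing, including the four extra Boost lines, might look roughly like this in Qt4-era CMake:

```cmake
cmake_minimum_required(VERSION 2.8)
project(HelloQt)

# The four extra Boost lines (the fourth is the link step below):
set(Boost_USE_STATIC_LIBS ON)                      # the Windows Boost binaries are static
find_package(Boost REQUIRED COMPONENTS signals)
include_directories(${Boost_INCLUDE_DIRS})

find_package(Qt4 REQUIRED)
include(${QT_USE_FILE})

# WIN32 suppresses the console window; MACOSX_BUNDLE builds an app bundle.
add_executable(helloQt WIN32 MACOSX_BUNDLE main.cxx)
target_link_libraries(helloQt ${QT_QTMAIN_LIBRARY} ${QT_LIBRARIES})
target_link_libraries(helloQt ${Boost_LIBRARIES})  # the fourth Boost line

install(TARGETS helloQt DESTINATION bin)
include(InstallRequiredSystemLibraries)  # MSVC runtime + side-by-side manifests
set(CPACK_PACKAGE_VERSION "1.0")
set(CPACK_PACKAGE_NAME "HelloQt")
include(CPack)
```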
Basically, I'm telling it to use the static Boost--this is because the Windows
binary for Boost comes as a static library. And I'm saying I want
Boost required and I want to make sure it has the signals library built with it. And
then I'm going to include the Boost include directories, and finally, I'm going to link
the Boost libraries to the target. And then the rest of the stuff is the same. So now
I've got something that uses Qt and Boost and it'll work on Mac and Windows. And
if you tried to do that, you know, with makefiles or external projects, it would probably
take you a lot longer, I would imagine. Finally, this is a slide on one of the hot
areas of research at Kitware is informatics visualization. So we're actively working with
Sandia. Most of our history about visualization is looking at things that sort of make sense
on the screen like a CT scan. You look at that, you look at the visualization. I mean,
you go, "Yeah, that's--you know, that's the inside of somebody." You know, you look at
this thing over on the left, you go, "Ah." You know, but what about looking at a class
hierarchy? How do you visualize that? And this is probably something that happens at
Google all the time. But we're working on that--and also in a scalable way,
so it will work on supercomputers and things like that. The idea here is to--maybe we can
leverage some of this into the dashboards or testing. We're not sure really where it's
going but that's the general idea. So, in summary, I talked about CMake, the build tool
and testing with CTest and CDash, and finally, deploying your software with CPack. And there
are some links to,, and There's my email. I'm always
happy to answer email. I love reading email. And thanks for your time but I got one more
thing I want to do. I was going to do a quick demo here. So I'm going to drop out a PowerPoint.
But this is the last slide in the PowerPoint. And let's see if this works. Do a live demo.
So I'm going to run--I've got that example of Qt that I talked about with boost. And
of course it's too big because I had to change the resolution of the screen. So configure
and we'll pick--we'll try Visual Studio 2008 and it's testing the compiler down here. It's
checking the compiler ABI. It's found Qt. It's found boost with signals. You can see
it found my open source build of Qt. I can look at an advanced view. I can look in a
group view so we can see--let's see, what did it find with Boost? It found Boost 1.38
in Program Files, found the library directory for Boost. You get the idea. We'll configure again,
generate, and then I'm in example QtB, which was this directory here. It'll pop up the
solution in a little bit. And then I'm going to package it. And it's running NSIS for
me, and this is the same CMake source code that I just showed in the slide. [INDISTINCT]
And if you look here--hold on--we have an installer. It's pretty minimal. It's got
some defaults, installs into HelloQt. And we're done. And then if you look over here,
there we are. It's here in this menu; you run it and we got "Hello world."
Yay. And let's look at this slide one more time. And now I'm done. So any questions or…?
>> [INDISTINCT] >> HOFFMAN: Well, there's--I mean there's
a big example out there. If you want to look at one, there's KDE. I mean that's probably
one of the largest open source efforts out there. It's probably a couple of million lines
of code. >> [INDISTINCT]
>> HOFFMAN: Yes. You do a set--you'll set the library sources and give a big list,
and you can, you know, put some [INDISTINCT] and, you know, "if Apple, include these,"
that type of thing. Usually, really, really large projects will
create macros or functions in the CMake language, and they'll have, like, you know, an "add
KDE library" macro. It might do some extra stuff for you, but it does scale to really, really
big projects.
>> HOFFMAN: No, no, no. >> Okay.
>> HOFFMAN: Yes. No, no. It would be something more like--you know, I could--I don't know
if… And this isn't a huge project, CMake itself, but… So there's sort of a source list--
something saying "set SOURCES" and I'm listing them out. And then right here I've got, you
know, these ELF sources, which is the optional thing--if I'm on Windows, I don't have
the ELF editor. And it's got each file on its own line, so it doesn't get complex. And then
here, I'm adding--you know, if I'm on UNIX, I'm going to do the KDevelop stuff. And there are some
Apple sources for the Xcode generator. And there's the Visual Studio stuff. So--sure.
Good question. I probably should modify the slides to show more complicated things as
well. You want to show it's easy to use, but that it also scales. Yes.
>> [INDISTINCT] >> HOFFMAN: No, we don't. I mean, CMake
has got to be around. I mean, we're using it for the copy commands, we're using it for
the installer--the install rules, the cross-platform installing, it's
a CMake command that's actually running to do all that work. So there's a lot of infrastructure
that you really need cross-platform. Otherwise you end up saying, "All right, well, you don't
have to have a build tool around--we can do Visual Studio projects--but you have to have
Python because we've got to do this extra stuff anyway." So the idea is to have a minimum
requirement, which is the C++ compiler. That was our take, you know, 10 years ago when
we started this, and I'm still standing by it. >> [INDISTINCT]
>> HOFFMAN: Really well. I mean, I think when KDE went over, the build times went significantly
down from their autotools build. KDE uses a build farm and stuff. And it's, I
mean, a couple-of-million-lines-of-C++ type thing.
>> [INDISTINCT] >> HOFFMAN: They're using distcc, I think. You
just set the compiler to be distcc and then run CMake.
>> [INDISTINCT] >> HOFFMAN: Depending on what you do. I mean,
yes, it depends on the size of the project and how many try-compiles you're doing. So
if you go crazy with, you know, checking for every possible header or whatever, it can
get slower, but it only does it once. >> [INDISTINCT]
>> HOFFMAN: No, they've all been cached. Yes. >> [INDISTINCT]
>> HOFFMAN: Yes. It should be under a minute, I would think, on a reasonable machine. Like
the Trilinos Project takes about a minute or so. It's like 42 packages and each one
has several libraries and executables so it's probably, you know, something along that scale.
Depends on the computer and all that, but we do try to make it as fast as possible.
>> [INDISTINCT] >> HOFFMAN: Yes. But yes, we spend a lot of
time optimizing it whenever we get the chance. I think we're about done then.
>> Yes. >> HOFFMAN: Yes.
>> [INDISTINCT] >> HOFFMAN: Thanks for inviting me, I had
a great time.