ub1404 dev workflow

Tweaks to the recipes to avoid repetition of work, plus ub1404 dev support:
* let the apt cookbook handle apt-get update globally
* do not download, configure, make, and make install if the package is
already installed
* add guards for file deletion to first check whether the file is present
(see the sketch below)
* use the docker cookbook for image building and running, to only build if
not already built and only run if not already running
* drop the mysql table and recreate it each time
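
A minimal sketch (not this commit's actual code) of the guard pattern these bullets describe; the package name and paths are hypothetical:

```ruby
# Hypothetical example: skip the download/configure/make/install chain
# when the built binary is already present on the node.
execute 'build-and-install-proftpd' do
  command './configure && make && make install'
  cwd '/usr/local/src/proftpd'
  not_if { ::File.exist?('/usr/local/sbin/proftpd') }
end

# Guarded file deletion: only attempt the delete when the file exists.
file '/tmp/proftpd.tar.gz' do
  action :delete
  only_if { ::File.exist?('/tmp/proftpd.tar.gz') }
end
```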

Also,
* bump Docker cookbook to 4.9.3
* bump mysql cookbook to 8.5.1
* add apt cookbook for better apt-update management
* bump depends versions and add apt
* modify readme with customization instructions
* modify all chef runlists to call apt first
* add a vagrantfile for dev of ub1404
This commit is contained in:
Dave Eargle 2019-10-26 02:44:08 -06:00
parent 48615d1d22
commit 72dc282aa0
123 changed files with 1761 additions and 7241 deletions

View File

@ -53,6 +53,29 @@ Thanks to [Jeremy](https://twitter.com/webpwnized), you can also follow the step
https://www.youtube.com/playlist?list=PLZOToVAK85MpnjpcVtNMwmCxMZRFaY6mT
### ub1404 Development and Modification
Using Vagrant and a lightweight Ubuntu 14.04 vagrant cloud box image, you can quickly set up and customize ub1404 Metasploitable3 for development or customization.
To do so, install Vagrant and a hypervisor such as VirtualBox. Then, visit the `bento/ubuntu-14.04` page and find a version that supports
your hypervisor. For instance, version `v201808.24.0` is compatible with VirtualBox.
Install the vagrant virtualbox vbguest plugin: `vagrant plugin install vagrant-vbguest`
Then, navigate to the `/chef/dev/ub1404` directory in this repository. Examine the Vagrantfile there. Metasploitable ub1404 uses the vagrant `chef-solo` provisioner.
To this Vagrantfile, add the metasploitable chef recipes that you desire -- you can browse them in the `/chef/cookbooks/metasploitable` folder. Or,
add or edit your own cookbook and/or recipes there.
From the `/chef/dev/ub1404` directory, you can run `vagrant up` to get a development virtual ub1404 instance. After the initial `up` build and provision,
when you edit the chef runlist or when you edit a chef recipe, run `vagrant provision` from the same directory. For faster development, you can comment out
recipes that you do not need to rerun -- but even if they are all enabled, vagrant provisioning should not take longer than one or two minutes.
Chef aims to be idempotent, so you can rerun this command often.
Consider taking a snapshot (e.g., `vagrant snapshot save fresh`) before modifying recipes, so that you can always return to an initial state (`vagrant snapshot restore fresh`).
If you want a _totally_ fresh snapshot, you can do the initialization with `vagrant up --no-provision`, then take a snapshot, followed by `vagrant provision`.
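A minimal sketch of such a Vagrantfile, assuming the box version mentioned above; the cookbook path and recipe names are illustrative:

```ruby
Vagrant.configure('2') do |config|
  config.vm.box         = 'bento/ubuntu-14.04'
  config.vm.box_version = '201808.24.0'      # assumed VirtualBox-compatible version

  config.vm.provision 'chef_solo' do |chef|
    chef.cookbooks_path = '../../cookbooks'  # assumed relative path within this repo
    chef.add_recipe 'apt'                    # apt first, per this commit
    chef.add_recipe 'metasploitable::mysql'  # comment out recipes you are not iterating on
  end
end
```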
## Vulnerabilities
* [See the wiki page](https://github.com/rapid7/metasploitable3/wiki/Vulnerabilities)

View File

@ -0,0 +1,368 @@
# apt Cookbook CHANGELOG
This file is used to list changes made in each version of the apt cookbook.
## 7.2.0 (2019-08-05)
- Allow you to specify dpkg options just for unattended upgrades - [@majormoses](https://github.com/majormoses)
- Adding documentation and tests for setting dpkg options unattended upgrades - [@majormoses](https://github.com/majormoses)
- Test on Chef 15 + Chef Workstation - [@tas50](https://github.com/tas50)
- Remove tests of the resources now built into Chef - [@tas50](https://github.com/tas50)
- Remove respond_to from the metadata - [@tas50](https://github.com/tas50)
- Remove the recipe description from the metadata as these aren't used - [@tas50](https://github.com/tas50)
- Replace Chef 12 testing with 13.3 - [@tas50](https://github.com/tas50)
- Remove Ubuntu 14.04 / Debian 8 testing and add Debian 10 testing - [@tas50](https://github.com/tas50)
## 7.1.1 (2018-10-11)
- Allow customizing the sender email for unattended-upgrades
## 7.1.0 (2018-09-05)
- Add the installation of dirmngr and gnupg to the apt default cookbook to support secure repositories
- Added support for the unattended-upgrade SyslogEnable configuration feature
- Added support for the unattended-upgrade SyslogFacility configuration feature
## 7.0.0 (2018-04-06)
### Breaking Change
- This cookbook no longer includes apt_preference as that resource was moved into Chef Client 13.3. The cookbook now also requires Chef 13.3 or later. If you require support for an older release of Chef you will need to pin to a 6.X release.
## 6.1.4 (2017-08-31)
- Restores ignore_failure true on compile time update.
- name_property vs name_attribute in the resource
## 6.1.3 (2017-07-19)
- Fixed typo in readme
- Fixed config namespace in the 10dpkg-options file
## 6.1.2 (2017-06-20)
- Restore backwards compatibility by respecting node['apt']['periodic_update_min_delay']
## 6.1.1 (2017-06-20)
- Remove action_class.class_eval usage that caused failures
- Remove wrong warning logs generated by apt_preference
- Fix wrong warning log in cacher-client recipe
## 6.1.0 (2017-04-11)
- Test with local delivery and not Rake
- Use proper value type for bsd-mailx package only_if/not_if block
- Update apache2 license string
- Convert apt_preference to a custom resource
## 6.0.1 (2017-02-27)
- Update cookbook description
- Testing updates for Chef 13 and fixes to the cacher recipe
## 6.0.0 (2017-02-08)
### Breaking changes
- apt_update and apt_repository resources have been removed from the cookbook. These resources were both added to the chef-client itself. Due to this we now require Chef 12.9 or later, which has both of these resources built in. If you require compatibility with older chef-client releases you will need to pin to the 5.X release.
### Other changes
- apt_preference resource now properly requires a pin_priority, which prevents us from writing out bad preference files that must be manually removed
## 5.1.0 (2017-02-01)
- Convert integration tests to inspec
- Add management of the /etc/apt/apt.conf.d/10dpkg-options file with new attributes. This allows tuning of how dpkg will handle package prompts during package installation. Note that Chef 12.19+ will automatically suppress package prompts
## 5.0.1 (2016-12-22)
- Avoid CHEF-3694 in apt_preferences resource
- Cookstyle fixes
## 5.0.0 (2016-10-14)
- Remove search logic from the cacher client cookbook and rework attribute structure. See the attributes file and readme for new structure. Determining what servers to cache against is better handled in a wrapper cookbook where you can define the exact search syntax yourself
- Corrected readme examples for the cacher client setup
- Depend on the latest compat_resource
- Define matchers for ChefSpec
- Testing updates to better test the various recipes and providers in the cookbook on Travis
## 4.0.2 (2016-08-13)
- The cookbook requires Chef 12.1+, not 12.0. Update docs
- Test on Chef 12.1 to ensure compatibility
- Restore compatibility with Chef < 12.4
## 4.0.1 (2016-06-21)
- Fix bug that prevented adding the cookbook to non Debian/Ubuntu nodes without chef run failures
## 4.0.0 (2016-06-02)
This cookbook now requires Chef 12. If you require Chef 11 compatibility you will need to pin to the 3.X cookbook version
- The apt-get update logic in the default recipe has been converted to apt_update custom resource and compat_resource cookbook has been added for backwards compatibility with all Chef 12.X releases. In addition this resource is now included in core chef-client and the cookbook will use the built-in resource if available
- Added support for the unattended-upgrade RandomSleep configuration feature
- Added support for the unattended-upgrade Unattended-Upgrade::Origins-Pattern configuration feature
- Added Chefspec matchers for apt_update
- Fixed apt_repository documentation to correctly reflect the deb_src property
## 3.0.0 (2016-03-01)
- Removed Chef 10 compatibility code. This cookbook requires Chef 11 or greater now
- The default recipe will no longer create /etc/apt/ and other directories on non-Debian based systems
- Updated the autoremove command in the default recipe to run in non-interactive mode
- Added CentOS 7 to Test Kitchen with tests to ensure we don't create any files on RHEL or other non-Debian hosts
- Updated Chefspec to 4.X format
- Properly mock the existence of apt for the Chefspec runs so they don't just skip over the resources
- Fixed lwrp test kitchen tests to pass
- Resolved or disabled all Rubocop warnings
- Enabled testing in Travis CI
- Removed Apt Cacher NG support for Ubuntu 10.04 and Debian 6.X as they are both deprecated
- Fixed + signs in package names with the preference LWRP being rejected
## v2.9.2
- #168 Adding guard to package resource.
## v2.9.1
- Adding package apt-transport-https to default.rb
## v2.9.0
- Add `sensitive` flag for apt_repositories
- Enable installation of recommended or suggested packages
- Tidy up `apt-get update` logic
- Fixing not_if guard on ruby_block[validate-key #{key}]
## v2.8.2 (2015-08-24)
- Fix removal of apt_preferences
## v2.8.1 (2015-08-18)
- Handle keyservers as URLs and bare hostnames
## v2.8.0 (2015-08-18)
- Access keyservers on port 80
- Adds key_proxy as LWRP attribute for apt_repository
- Fix wildcard glob preferences files
- Fix text output verification for non en_US locales
- Quote repo URLs to deal with spaces
## v2.7.0 (2015-03-23)
- Support Debian 8.0
- Filename verification for LWRPs
- Support SSL enabled apt repositories
## v2.6.1 (2014-12-29)
- Remove old preference files without .pref extension from previous versions
## v2.6.0 (2014-09-09)
- Always update on first run - check
- Adding ppa support for apt_repository
## v2.5.3 (2014-08-14)
- #87 - Improve default settings, account for non-linux platforms
## v2.5.2 (2014-08-14)
- Fully restore 2.3.10 behaviour
## v2.5.1 (2014-08-14)
- fix breakage introduced in apt 2.5.0
## v2.5.0 (2014-08-12)
- Add unattended-upgrades recipe
- Only update the cache for the created repository
- Added ChefSpec matchers and default_action for resources
- Avoid cloning resource attributes
- Minor documentation updates
## v2.4.0 (2014-05-15)
- [COOK-4534]: Add option to update apt cache at compile time
## v2.3.10 (2014-04-23)
- [COOK-4512] Bugfix: Use empty PATH if PATH is nil
## v2.3.8 (2014-02-14)
### Bug
- **[COOK-4287](https://tickets.opscode.com/browse/COOK-4287)** - Cleanup the Kitchen
## v2.3.6
- [COOK-4154] - Add chefspec matchers.rb file to apt cookbook
- [COOK-4102] - Only index created repository
## v2.3.4
No change. Version bump for toolchain sanity
## v2.3.2
- [COOK-3905] apt-get-update-periodic: configuration for the update period
- Updating style for rubocops
- Updating test-kitchen harness
## v2.3.0
### Bug
- **[COOK-3812](https://tickets.opscode.com/browse/COOK-3812)** - Add a way to bypass the apt existence check
### Improvement
- **[COOK-3567](https://tickets.opscode.com/browse/COOK-3567)** - Allow users to bypass apt-cache via attributes
## v2.2.1
### Improvement
- **[COOK-664](https://tickets.opscode.com/browse/COOK-664)** - Check platform before running apt-specific commands
## v2.2.0
### Bug
- **[COOK-3707](https://tickets.opscode.com/browse/COOK-3707)** - multiple nics confuse apt::cacher-client
## v2.1.2
### Improvement
- **[COOK-3551](https://tickets.opscode.com/browse/COOK-3551)** - Allow user to set up a trusted APT repository
## v2.1.1
### Bug
- **[COOK-1856](https://tickets.opscode.com/browse/COOK-1856)** - Match GPG keys without case sensitivity
## v2.1.0
- [COOK-3426]: cacher-ng fails with restrict_environment set to true
- [COOK-2859]: cacher-client executes out of order
- [COOK-3052]: Long GPG keys are downloaded on every run
- [COOK-1856]: apt cookbook should match keys without case sensitivity
- [COOK-3255]: Attribute name incorrect in README
- [COOK-3225]: Call use_inline_resources only if defined
- [COOK-3386]: Cache dir for apt-cacher-ng
- [COOK-3291]: apt_repository: enable usage of a keyserver on port 80
- Greatly expanded test coverage with ChefSpec and Test-Kitchen
## v2.0.0
### Bug
- [COOK-2258]: apt: LWRP results in error under why-run mode in apt 1.9.0 cookbook
## v1.10.0
### Improvement
- [COOK-2885]: Improvements for apt cache server search
### Bug
- [COOK-2441]: Apt recipe broken in new chef version
- [COOK-2660]: Create Debian 6.0 "squeeze" specific template for apt-cacher-ng
## v1.9.2
- [COOK-2631] - Create Ubuntu 10.04 specific template for apt-cacher-ng
## v1.9.0
- [COOK-2185] - Proxy for apt-key
- [COOK-2338] - Support pinning by glob() or regexp
## v1.8.4
- [COOK-2171] - Update README to clarify required Chef version: 10.18.0 or higher.
## v1.8.2
- [COOK-2112] - need [] around "arch" in sources.list entries
- [COOK-2171] - fixes a regression in the notification
## v1.8.0
- [COOK-2143] - Allow for a custom cacher-ng port
- [COOK-2171] - On `apt_repository.run_action(:add)` the source file is not created.
- [COOK-2184] - apt::cacher-ng, use `cacher_port` attribute in acng.conf
## v1.7.0
- [COOK-2082] - add "arch" parameter to apt_repository LWRP
## v1.6.0
- [COOK-1893] - `apt_preference` use "`package_name`" resource instead of "name"
- [COOK-1894] - change filename for sources.list.d files
- [COOK-1914] - Wrong dir permissions for /etc/apt/preferences.d/
- [COOK-1942] - README.md has wrong name for the keyserver attribute
- [COOK-2019] - create 01proxy before any other apt-get updates get executed
## v1.5.2
- [COOK-1682] - use template instead of file resource in apt::cacher-client
- [COOK-1875] - cacher-client should be Environment-aware
## v1.5.0
- [COOK-1500] - Avoid triggering apt-get update
- [COOK-1548] - Add execute commands for autoclean and autoremove
- [COOK-1591] - Setting up the apt proxy should leave https connections direct
- [COOK-1596] - execute[apt-get-update-periodic] never runs
- [COOK-1762] - create /etc/apt/preferences.d directory
- [COOK-1776] - apt key check isn't idempotent
## v1.4.8
- Adds test-kitchen support
- [COOK-1435] - repository lwrp is not idempotent with http key
## v1.4.6
- [COOK-1530] - apt_repository isn't aware of update-success-stamp file (also reverts COOK-1382 patch).
## v1.4.4
- [COOK-1229] - Allow cacher IP to be set manually in non-Chef Solo environments
- [COOK-1530] - Immediately update apt-cache when sources.list file is dropped off
## v1.4.2
- [COOK-1155] - LWRP for apt pinning
## v1.4.0
- [COOK-889] - overwrite existing repo source files
- [COOK-921] - optionally use cookbook_file or remote_file for key
- [COOK-1032] - fixes problem with apt repository key installation

View File

@ -1,2 +1,2 @@
Please refer to
https://github.com/chef-cookbooks/community_cookbook_documentation/blob/master/TESTING.MD
https://github.com/chef-cookbooks/community_cookbook_documentation/blob/master/CONTRIBUTING.MD

View File

@ -0,0 +1,215 @@
# apt Cookbook
[![Build Status](https://img.shields.io/travis/chef-cookbooks/apt.svg)][travis] [![Cookbook Version](https://img.shields.io/cookbook/v/apt.svg)][cookbook]
This cookbook includes recipes to execute apt-get update to ensure the local APT package cache is up to date. There are recipes for managing the apt-cacher-ng caching proxy and proxy clients. It also includes a custom resource for pinning packages via /etc/apt/preferences.d.
## Requirements
### Platforms
- Ubuntu 12.04+
- Debian 7+
May work with or without modification on other Debian derivatives.
### Chef
- Chef 13.3+
### Cookbooks
- None
## Recipes
### default
This recipe manually updates the timestamp file used to only run `apt-get update` if the cache is more than one day old.
This recipe should appear first in the run list of Debian or Ubuntu nodes to ensure that the package cache is up to date before managing any `package` resources with Chef.
This recipe also sets up a local cache directory for preseeding packages.
**Including the default recipe on a node that does not support apt (such as Windows or RHEL) results in a noop.**
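A role file sketch honoring that ordering (role and recipe names are illustrative):

```ruby
# Illustrative Chef role putting apt first in the run list
name 'base'
description 'Refresh the apt cache before any package resources converge'
run_list 'recipe[apt]', 'recipe[my_app]'
```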
### cacher-client
Configures the node to use an `apt-cacher-ng` server to cache apt requests. Configuration of the server to use is located in `default['apt']['cacher_client']['cacher_server']`, which is a hash containing `host`, `port`, `proxy_ssl`, and `cache_bypass` keys. Example:
```json
{
  "apt": {
    "cacher_client": {
      "cacher_server": {
        "host": "cache_server.mycorp.dmz",
        "port": 1234,
        "proxy_ssl": true,
        "cache_bypass": {
          "download.oracle.com": "http"
        }
      }
    }
  }
}
```
#### Bypassing the cache
Occasionally you may come across repositories that do not play nicely when the node is using an `apt-cacher-ng` server. You can configure `cacher-client` to bypass the server and connect directly to the repository with the `cache_bypass` attribute.
To do this, you need to override the `cache_bypass` attribute with a hash of repositories, with each key as the repository URL and value as the protocol to use:
```json
{
  "apt": {
    "cacher_client": {
      "cacher_server": {
        "cache_bypass": {
          "URL": "PROTOCOL"
        }
      }
    }
  }
}
```
For example, to prevent caching and directly connect to the repository at `download.oracle.com` via http and the repo at `nginx.org` via https:
```json
{
  "apt": {
    "cacher_client": {
      "cacher_server": {
        "cache_bypass": {
          "download.oracle.com": "http",
          "nginx.org": "https"
        }
      }
    }
  }
}
```
### cacher-ng
Installs the `apt-cacher-ng` package and service so the system can provide APT caching. You can check the usage report at <http://{hostname}:3142/acng-report.html>.
If you wish to help the `cacher-ng` recipe seed itself, you must now explicitly include the `cacher-client` recipe in your run list **after** `cacher-ng` or you will block your ability to install any packages (i.e. `apt-cacher-ng`).
### unattended-upgrades
Installs and configures the `unattended-upgrades` package to provide automatic package updates. This can be configured to upgrade all packages or to just install security updates by setting `['apt']['unattended_upgrades']['allowed_origins']`.
To pull just security updates, set `origins_patterns` to something like `["origin=Ubuntu,archive=trusty-security"]` (for Ubuntu trusty) or `["origin=Debian,label=Debian-Security"]` (for Debian).
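As an illustrative sketch, a wrapper cookbook's attributes file could enable security-only unattended upgrades like so (values assumed for Ubuntu trusty):

```ruby
# Assumed wrapper-cookbook attributes; adjust the origin pattern per platform
default['apt']['unattended_upgrades']['enable'] = true
default['apt']['unattended_upgrades']['origins_patterns'] = ['origin=Ubuntu,archive=trusty-security']
```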
## Attributes
### General
- `['apt']['compile_time_update']` - force the default recipe to run `apt-get update` at compile time.
- `['apt']['periodic_update_min_delay']` - minimum delay (in seconds) between two actual executions of `apt-get update` by the `execute[apt-get-update-periodic]` resource, default is '86400' (24 hours)
### Caching
- `['apt']['cacher_client']['cacher_server']` - Hash containing server information used by clients for caching. See the example in the recipes section above for the full format of the hash.
- `['apt']['cacher_interface']` - interface to connect to the cacher-ng service, no default.
- `['apt']['cacher_port']` - port for the cacher-ng service (used by server recipe only), default is '3142'
- `['apt']['cacher_dir']` - directory used by cacher-ng service, default is '/var/cache/apt-cacher-ng'
- `['apt']['compiletime']` - force the `cacher-client` recipe to run before other recipes. It forces apt to use the proxy before other recipes run. Useful if your nodes have limited access to public apt repositories. This is overridden if the `cacher-ng` recipe is in your run list. Default is 'false'
### Unattended Upgrades
- `['apt']['unattended_upgrades']['enable']` - enables unattended upgrades, default is false
- `['apt']['unattended_upgrades']['update_package_lists']` - automatically update package list (`apt-get update`) daily, default is true
- `['apt']['unattended_upgrades']['allowed_origins']` - array of allowed apt origins from which to pull automatic upgrades, defaults to a guess at the system's main origin and should almost always be overridden
- `['apt']['unattended_upgrades']['origins_patterns']` - array of allowed apt origin patterns from which to pull automatic upgrades, defaults to none.
- `['apt']['unattended_upgrades']['package_blacklist']` - an array of packages which should never be automatically upgraded, defaults to none
- `['apt']['unattended_upgrades']['auto_fix_interrupted_dpkg']` - attempts to repair dpkg state with `dpkg --force-confold --configure -a` if it exits uncleanly, defaults to false (contrary to the unattended-upgrades default)
- `['apt']['unattended_upgrades']['minimal_steps']` - Split the upgrade into the smallest possible chunks. This makes the upgrade a bit slower but it has the benefit that shutdown while an upgrade is running is possible (with a small delay). Defaults to false.
- `['apt']['unattended_upgrades']['install_on_shutdown']` - Install upgrades when the machine is shutting down instead of doing it in the background while the machine is running. This will (obviously) make shutdown slower. Defaults to false.
- `['apt']['unattended_upgrades']['mail']` - Send email to this address for problems or package upgrades. Defaults to no email.
- `['apt']['unattended_upgrades']['sender']` - Send email from this address for problems or package upgrades. Defaults to 'root'.
- `['apt']['unattended_upgrades']['mail_only_on_error']` - If set, email will only be sent on upgrade errors. Otherwise, an email will be sent after each upgrade. Defaults to true.
- `['apt']['unattended_upgrades']['remove_unused_dependencies']` - Do automatic removal of new unused dependencies after the upgrade. Defaults to false.
- `['apt']['unattended_upgrades']['automatic_reboot']` - Automatically reboots _without confirmation_ if a restart is required after the upgrade. Defaults to false.
- `['apt']['unattended_upgrades']['dl_limit']` - Limits the bandwidth used by apt to download packages. Value given as an integer in kb/sec. Defaults to nil (no limit).
- `['apt']['unattended_upgrades']['random_sleep']` - Wait a random number of seconds up to this value before running daily periodic apt actions. System default is 1800 seconds (30 minutes).
- `['apt']['unattended_upgrades']['syslog_enable']` - Enable logging to syslog. Defaults to false.
- `['apt']['unattended_upgrades']['syslog_facility']` - Specify syslog facility. Defaults to 'daemon'.
- `['apt']['unattended_upgrades']['dpkg_options']` - An array of dpkg options to be used specifically only for unattended upgrades. Defaults to `[]`, which prevents the options block from being rendered in the resulting file.
### Configuration for APT
- `['apt']['confd']['force_confask']` - Prompt when overwriting configuration files. (default: false)
- `['apt']['confd']['force_confdef']` - Don't prompt when overwriting configuration files. (default: false)
- `['apt']['confd']['force_confmiss']` - Install removed configuration files when upgrading packages. (default: false)
- `['apt']['confd']['force_confnew']` - Overwrite configuration files when installing packages. (default: false)
- `['apt']['confd']['force_confold']` - Keep modified configuration files when installing packages. (default: false)
- `['apt']['confd']['install_recommends']` - Consider recommended packages as a dependency for installing. (default: true)
- `['apt']['confd']['install_suggests']` - Consider suggested packages as a dependency for installing. (default: false)
## Libraries
There is an `interface_ipaddress` method that returns the IP address for a particular host and interface, used by the `cacher-client` recipe. To enable it on the server use the `['apt']['cacher_interface']` attribute.
## Usage
Put `recipe[apt]` first in the run list. If you have other recipes that you want to use to configure how apt behaves, like new sources, notify the execute resource to run, e.g.:
```ruby
template '/etc/apt/sources.list.d/my_apt_sources.list' do
  notifies :run, 'execute[apt-get update]', :immediately
end
```
The above will run during the execution phase since it is a normal template resource, and should appear before other package resources that need the sources in the template.
Put `recipe[apt::cacher-ng]` in the run_list for a server to provide APT caching and add `recipe[apt::cacher-client]` on the rest of the Debian-based nodes to take advantage of the caching server.
If you want to clean up unused packages, the cookbook also provides `apt-get autoclean` and `apt-get autoremove` execute resources for automated cleanup.
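For example, one of your own resources could notify those cleanup resources (the package name here is hypothetical):

```ruby
package 'obsolete-tool' do   # hypothetical package
  action :remove
  notifies :run, 'execute[apt-get autoremove]', :delayed
  notifies :run, 'execute[apt-get autoclean]', :delayed
end
```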
## Resources
### apt_preference
The apt_preference resource has been moved into chef-client in Chef 13.3.
See <https://docs.chef.io/resource_apt_preference.html> for usage details
### apt_repository
The apt_repository resource has been moved into chef-client in Chef 12.9.
See <https://docs.chef.io/resource_apt_repository.html> for usage details
### apt_update
The apt_update resource has been moved into chef-client in Chef 12.7.
See <https://docs.chef.io/resource_apt_update.html> for usage details
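Since all three resources now ship in chef-client itself, they can be used directly in any recipe; a minimal `apt_update` example:

```ruby
# Built-in chef-client resource: refresh the apt cache if older than a day
apt_update 'daily cache refresh' do
  frequency 86_400
  action :periodic
end
```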
## Maintainers
This cookbook is maintained by Chef's Community Cookbook Engineering team. Our goal is to improve cookbook quality and to aid the community in contributing to cookbooks. To learn more about our team, process, and design goals see our [team documentation](https://github.com/chef-cookbooks/community_cookbook_documentation/blob/master/COOKBOOK_TEAM.MD). To learn more about contributing to cookbooks like this see our [contributing documentation](https://github.com/chef-cookbooks/community_cookbook_documentation/blob/master/CONTRIBUTING.MD), or if you have general questions about this cookbook come chat with us in #cookbook-engineering on the [Chef Community Slack](http://community-slack.chef.io/)
## License
**Copyright:** 2009-2017, Chef Software, Inc.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
[cookbook]: https://community.chef.io/cookbooks/apt
[travis]: https://travis-ci.org/chef-cookbooks/apt

View File

@ -0,0 +1,62 @@
#
# Cookbook:: apt
# Attributes:: default
#
# Copyright:: 2009-2017, Chef Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
default['apt']['cacher_dir'] = '/var/cache/apt-cacher-ng'
default['apt']['cacher_interface'] = nil
default['apt']['cacher_port'] = 3142
default['apt']['compiletime'] = false
default['apt']['compile_time_update'] = false
default['apt']['key_proxy'] = ''
default['apt']['periodic_update_min_delay'] = 86_400
default['apt']['launchpad_api_version'] = '1.0'
default['apt']['unattended_upgrades']['enable'] = false
default['apt']['unattended_upgrades']['update_package_lists'] = true
# this needs a good default
codename = node.attribute?('lsb') ? node['lsb']['codename'] : 'notlinux'
default['apt']['unattended_upgrades']['allowed_origins'] = [
  "#{node['platform'].capitalize} #{codename}",
]
default['apt']['cacher_client']['cacher_server'] = {}
default['apt']['unattended_upgrades']['origins_patterns'] = []
default['apt']['unattended_upgrades']['package_blacklist'] = []
default['apt']['unattended_upgrades']['auto_fix_interrupted_dpkg'] = false
default['apt']['unattended_upgrades']['minimal_steps'] = false
default['apt']['unattended_upgrades']['install_on_shutdown'] = false
default['apt']['unattended_upgrades']['mail'] = nil
default['apt']['unattended_upgrades']['sender'] = nil
default['apt']['unattended_upgrades']['mail_only_on_error'] = true
default['apt']['unattended_upgrades']['remove_unused_dependencies'] = false
default['apt']['unattended_upgrades']['automatic_reboot'] = false
default['apt']['unattended_upgrades']['automatic_reboot_time'] = 'now'
default['apt']['unattended_upgrades']['dl_limit'] = nil
default['apt']['unattended_upgrades']['random_sleep'] = nil
default['apt']['unattended_upgrades']['syslog_enable'] = false
default['apt']['unattended_upgrades']['syslog_facility'] = 'daemon'
default['apt']['unattended_upgrades']['dpkg_options'] = []
default['apt']['confd']['force_confask'] = false
default['apt']['confd']['force_confdef'] = false
default['apt']['confd']['force_confmiss'] = false
default['apt']['confd']['force_confnew'] = false
default['apt']['confd']['force_confold'] = false
default['apt']['confd']['install_recommends'] = true
default['apt']['confd']['install_suggests'] = false

View File

@ -0,0 +1 @@
APT::Update::Post-Invoke-Success {"touch /var/lib/apt/periodic/update-success-stamp 2>/dev/null || true";};

View File

@ -0,0 +1,50 @@
[DEFAULT]
;; All times are in seconds, but you can add a suffix
;; for minutes(m), hours(h) or days(d)
;; commented out address so apt-proxy will listen on all IPs
;; address = 127.0.0.1
port = 9999
cache_dir = /var/cache/apt-proxy
;; Control files (Packages/Sources/Contents) refresh rate
min_refresh_delay = 1s
complete_clientless_downloads = 1
;; Debugging settings.
debug = all:4 db:0
time = 30
passive_ftp = on
;;--------------------------------------------------------------
;; Cache housekeeping
cleanup_freq = 1d
max_age = 120d
max_versions = 3
;;---------------------------------------------------------------
;; Backend servers
;;
;; Place each server in its own [section]
[ubuntu]
; Ubuntu archive
backends =
http://us.archive.ubuntu.com/ubuntu
[ubuntu-security]
; Ubuntu security updates
backends = http://security.ubuntu.com/ubuntu
[debian]
;; Backend servers, in order of preference
backends =
http://debian.osuosl.org/debian/
[security]
;; Debian security archive
backends =
http://security.debian.org/debian-security
http://ftp2.de.debian.org/debian-security

View File

@ -0,0 +1,49 @@
#
# Cookbook:: apt
# Library:: helpers
#
# Copyright:: 2013-2017, Chef Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
module Apt
  # Helpers for apt
  module Helpers
    # Determines if apt is installed on a system.
    #
    # @return [Boolean]
    def apt_installed?
      !which('apt-get').nil?
    end

    # Finds a command in $PATH
    #
    # @return [String, nil]
    def which(cmd)
      ENV['PATH'] = '' if ENV['PATH'].nil?
      paths = (ENV['PATH'].split(::File::PATH_SEPARATOR) + %w(/bin /usr/bin /sbin /usr/sbin))

      paths.each do |path|
        possible = File.join(path, cmd)
        return possible if File.executable?(possible)
      end

      nil
    end
  end
end

Chef::Recipe.send(:include, ::Apt::Helpers)
Chef::Resource.send(:include, ::Apt::Helpers)
Chef::Provider.send(:include, ::Apt::Helpers)

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,15 @@
name 'apt'
maintainer 'Chef Software, Inc.'
maintainer_email 'cookbooks@chef.io'
license 'Apache-2.0'
description 'Configures apt and apt caching.'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version '7.2.0'
%w(ubuntu debian).each do |os|
  supports os
end
source_url 'https://github.com/chef-cookbooks/apt'
issues_url 'https://github.com/chef-cookbooks/apt/issues'
chef_version '>= 13.3'

View File

@ -0,0 +1,52 @@
#
# Cookbook:: apt
# Recipe:: cacher-client
#
# Copyright:: 2011-2017, Chef Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# remove Acquire::http::Proxy lines from /etc/apt/apt.conf since we use 01proxy
# these are leftover from preseed installs
execute 'Remove proxy from /etc/apt/apt.conf' do
  command "sed --in-place '/^Acquire::http::Proxy/d' /etc/apt/apt.conf"
  only_if 'grep Acquire::http::Proxy /etc/apt/apt.conf'
end

if node['apt']['cacher_client']['cacher_server'].empty?
  Chef::Log.warn("No cache server defined in node['apt']['cacher_client']['cacher_server']. Not setting up caching")

  f = file '/etc/apt/apt.conf.d/01proxy' do
    action(node['apt']['compiletime'] ? :nothing : :delete)
  end
  f.run_action(:delete) if node['apt']['compiletime']
else
  apt_update 'update for notification' do
    action :nothing
  end

  t = template '/etc/apt/apt.conf.d/01proxy' do
    source '01proxy.erb'
    owner 'root'
    group 'root'
    mode '0644'
    variables(
      server: node['apt']['cacher_client']['cacher_server']
    )
    action(node['apt']['compiletime'] ? :nothing : :create)
    notifies :update, 'apt_update[update for notification]', :immediately
  end
  t.run_action(:create) if node['apt']['compiletime']
end

include_recipe 'apt::default'

View File

@ -0,0 +1,39 @@
#
# Cookbook:: apt
# Recipe:: cacher-ng
#
# Copyright:: 2008-2017, Chef Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
package 'apt-cacher-ng'

directory node['apt']['cacher_dir'] do
  owner 'apt-cacher-ng'
  group 'apt-cacher-ng'
  mode '0755'
end

template '/etc/apt-cacher-ng/acng.conf' do
  source 'acng.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  notifies :restart, 'service[apt-cacher-ng]', :immediately
end

service 'apt-cacher-ng' do
  supports restart: true, status: false
  action [:enable, :start]
end

View File

@ -0,0 +1,98 @@
#
# Cookbook:: apt
# Recipe:: default
#
# Copyright:: 2008-2017, Chef Software, Inc.
# Copyright:: 2009-2017, Bryan McLellan <btm@loftninjas.org>
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# On systems where apt is not installed, the resources in this recipe are not
# executed. However, they _must_ still be present in the resource collection
# or other cookbooks which notify these resources will fail on non-apt-enabled
# systems.
file '/var/lib/apt/periodic/update-success-stamp' do
  owner 'root'
  group 'root'
  action :nothing
end

# If compile_time_update run apt-get update at compile time
if node['apt']['compile_time_update'] && apt_installed?
  apt_update('compile time') do
    frequency node['apt']['periodic_update_min_delay']
    ignore_failure true
  end.run_action(:periodic)
end

apt_update 'periodic' do
  frequency node['apt']['periodic_update_min_delay']
end

# For other recipes to call to force an update
execute 'apt-get update' do
  command 'apt-get update'
  ignore_failure true
  action :nothing
  notifies :touch, 'file[/var/lib/apt/periodic/update-success-stamp]', :immediately
  only_if { apt_installed? }
end

# Automatically remove packages that are no longer needed for dependencies
execute 'apt-get autoremove' do
  command 'apt-get -y autoremove'
  environment(
    'DEBIAN_FRONTEND' => 'noninteractive'
  )
  action :nothing
  only_if { apt_installed? }
end

# Automatically remove .deb files for packages no longer on your system
execute 'apt-get autoclean' do
  command 'apt-get -y autoclean'
  action :nothing
  only_if { apt_installed? }
end

%w(/var/cache/local /var/cache/local/preseeding).each do |dirname|
  directory dirname do
    owner 'root'
    group 'root'
    mode '0755'
    action :create
    only_if { apt_installed? }
  end
end

template '/etc/apt/apt.conf.d/10dpkg-options' do
  owner 'root'
  group 'root'
  mode '0644'
  source '10dpkg-options.erb'
  only_if { apt_installed? }
end

template '/etc/apt/apt.conf.d/10recommends' do
  owner 'root'
  group 'root'
  mode '0644'
  source '10recommends.erb'
  only_if { apt_installed? }
end

package %w(apt-transport-https gnupg dirmngr) do
  only_if { apt_installed? }
end

View File

@ -0,0 +1,47 @@
#
# Cookbook:: apt
# Recipe:: unattended-upgrades
#
# Copyright:: 2014-2017, Chef Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# On systems where apt is not installed, the resources in this recipe are not
# executed. However, they _must_ still be present in the resource collection
# or other cookbooks which notify these resources will fail on non-apt-enabled
# systems.
#
package 'unattended-upgrades' do # ~FC009
  response_file 'unattended-upgrades.seed.erb'
  action :install
end

package 'bsd-mailx' do
  not_if { node['apt']['unattended_upgrades']['mail'].nil? }
end

template '/etc/apt/apt.conf.d/20auto-upgrades' do
  owner 'root'
  group 'root'
  mode '0644'
  source '20auto-upgrades.erb'
end

template '/etc/apt/apt.conf.d/50unattended-upgrades' do
  owner 'root'
  group 'root'
  mode '0644'
  source '50unattended-upgrades.erb'
end

View File

@ -0,0 +1,11 @@
Acquire::http::Proxy "http://<%= @server['host'] %>:<%= @server['port'] %>";
<% if @server['proxy_ssl'] %>
Acquire::https::Proxy "http://<%= @server['host'] %>:<%= @server['port'] %>";
<% else %>
Acquire::https::Proxy "DIRECT";
<% end %>
<% unless @server['cache_bypass'].nil? %>
<% @server['cache_bypass'].each do |bypass, type| %>
Acquire::<%= type %>::Proxy::<%= bypass %> "DIRECT";
<% end %>
<% end %>

View File

@ -0,0 +1,8 @@
# Managed by Chef
DPkg::Options {
<%= node['apt']['confd']['force_confask'] ? '"--force-confask";' : '' -%>
<%= node['apt']['confd']['force_confdef'] ? '"--force-confdef";' : '' -%>
<%= node['apt']['confd']['force_confmiss'] ? '"--force-confmiss";' : '' -%>
<%= node['apt']['confd']['force_confnew'] ? '"--force-confnew";' : '' -%>
<%= node['apt']['confd']['force_confold'] ? '"--force-confold";' : '' -%>
}

View File

@ -0,0 +1,3 @@
# Managed by Chef
APT::Install-Recommends "<%= node['apt']['confd']['install_recommends'] ? 1 : 0 %>";
APT::Install-Suggests "<%= node['apt']['confd']['install_suggests'] ? 1 : 0 %>";

View File

@ -0,0 +1,5 @@
APT::Periodic::Update-Package-Lists "<%= node['apt']['unattended_upgrades']['update_package_lists'] ? 1 : 0 %>";
APT::Periodic::Unattended-Upgrade "<%= node['apt']['unattended_upgrades']['enable'] ? 1 : 0 %>";
<% if node['apt']['unattended_upgrades']['random_sleep'] -%>
APT::Periodic::RandomSleep "<%= node['apt']['unattended_upgrades']['random_sleep'] %>";
<% end -%>

View File

@ -0,0 +1,104 @@
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
<% unless node['apt']['unattended_upgrades']['allowed_origins'].empty? -%>
<% node['apt']['unattended_upgrades']['allowed_origins'].each do |origin| -%>
"<%= origin %>";
<% end -%>
<% end -%>
};
<% unless node['apt']['unattended_upgrades']['origins_patterns'].empty? -%>
Unattended-Upgrade::Origins-Pattern {
<% node['apt']['unattended_upgrades']['origins_patterns'].each do |pattern| -%>
"<%= pattern %>";
<% end -%>
};
<% end -%>
// List of packages to not update
Unattended-Upgrade::Package-Blacklist {
<% unless node['apt']['unattended_upgrades']['package_blacklist'].empty? -%>
<% node['apt']['unattended_upgrades']['package_blacklist'].each do |package| -%>
"<%= package %>";
<% end -%>
<% end -%>
};
// This option allows you to control if on an unclean dpkg exit
// unattended-upgrades will automatically run
// dpkg --force-confold --configure -a
// The default is true, to ensure updates keep getting installed
Unattended-Upgrade::AutoFixInterruptedDpkg "<%= node['apt']['unattended_upgrades']['auto_fix_interrupted_dpkg'] ? 'true' : 'false' %>";
// Split the upgrade into the smallest possible chunks so that
// they can be interrupted with SIGUSR1. This makes the upgrade
// a bit slower but it has the benefit that shutdown while an upgrade
// is running is possible (with a small delay)
Unattended-Upgrade::MinimalSteps "<%= node['apt']['unattended_upgrades']['minimal_steps'] ? 'true' : 'false' %>";
// Install all unattended-upgrades when the machine is shutting down
// instead of doing it in the background while the machine is running
// This will (obviously) make shutdown slower
Unattended-Upgrade::InstallOnShutdown "<%= node['apt']['unattended_upgrades']['install_on_shutdown'] ? 'true' : 'false' %>";
<% if node['apt']['unattended_upgrades']['mail'] -%>
// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed.
Unattended-Upgrade::Mail "<%= node['apt']['unattended_upgrades']['mail'] %>";
<% end -%>
<% if node['apt']['unattended_upgrades']['sender'] -%>
// This option allows customizing the email address used in the
// 'From' header. unattended-upgrades will use "root" if unset.
Unattended-Upgrade::Sender "<%= node['apt']['unattended_upgrades']['sender'] %>";
<% end -%>
// Set this value to "true" to get emails only on errors. Default
// is to always send a mail if Unattended-Upgrade::Mail is set
Unattended-Upgrade::MailOnlyOnError "<%= node['apt']['unattended_upgrades']['mail_only_on_error'] ? 'true' : 'false' %>";
// Do automatic removal of new unused dependencies after the upgrade
// (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "<%= node['apt']['unattended_upgrades']['remove_unused_dependencies'] ? 'true' : 'false' %>";
// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "<%= node['apt']['unattended_upgrades']['automatic_reboot'] ? 'true' : 'false' %>";
<% if node['apt']['unattended_upgrades']['automatic_reboot'] -%>
// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately. Default is "now"
Unattended-Upgrade::Automatic-Reboot-Time "<%= node['apt']['unattended_upgrades']['automatic_reboot_time'] %>";
<% end %>
<% if node['apt']['unattended_upgrades']['dl_limit'] -%>
// Use apt bandwidth limit feature, this example limits the download
// speed to 70kb/sec
// Acquire::http::Dl-Limit "70";
Acquire::http::Dl-Limit "<%= node['apt']['unattended_upgrades']['dl_limit'] %>";
<% end -%>
// Enable logging to syslog. Default is False
Unattended-Upgrade::SyslogEnable "<%= node['apt']['unattended_upgrades']['syslog_enable'] ? 'true' : 'false' %>";
// Specify syslog facility. Default is daemon
Unattended-Upgrade::SyslogFacility "<%= node['apt']['unattended_upgrades']['syslog_facility'] %>";
// specify any dpkg options you want to run
// for example if you wanted to upgrade and use
// the installed version of config files when
// resolving conflicts during an upgrade you
// typically need:
// Dpkg::Options {
// "--force-confdef";
// "--force-confold";
//};
<% unless node['apt']['unattended_upgrades']['dpkg_options'].empty? -%>
Dpkg::Options {
<% node['apt']['unattended_upgrades']['dpkg_options'].each do |option|%>
"<%= option %>";
<% end -%>
};
<% end -%>

View File

@ -0,0 +1,275 @@
# Letter case in directive names does not matter. Must be separated with colons.
# Valid boolean values are a zero number for false, non-zero numbers for true.
CacheDir: <%= node['apt']['cacher_dir'] %>
# set empty to disable logging
LogDir: /var/log/apt-cacher-ng
# place to look for additional configuration and resource files if they are not
# found in the configuration directory
# SupportDir: /usr/lib/apt-cacher-ng
# TCP (http) port
# Set to 9999 to emulate apt-proxy
Port:<%= node['apt']['cacher_port'] %>
# Addresses or hostnames to listen on. Multiple addresses must be separated by
# spaces. Each entry must be an exact local address which is associated with a
# local interface. DNS resolution is performed using getaddrinfo(3) for all
# available protocols (IPv4, IPv6, ...). Using a protocol specific format will
# create binding(s) only on protocol specific socket(s) (e.g. 0.0.0.0 will listen
# only to IPv4).
#
# Default: not set, will listen on all interfaces and protocols
#
# BindAddress: localhost 192.168.7.254 publicNameOnMainInterface
# The specification of another proxy which shall be used for downloads.
# Username and password are supported; see the manual for limitations.
#
#Proxy: http://www-proxy.example.net:80
#proxy: username:proxypassword@proxy.example.net:3128
# Repository remapping. See manual for details.
# In this example, some backends files might be generated during package
# installation using information collected on the system.
Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian # Debian Archives
Remap-uburep: file:ubuntu_mirrors /ubuntu ; file:backends_ubuntu # Ubuntu Archives
Remap-debvol: file:debvol_mirror*.gz /debian-volatile ; file:backends_debvol # Debian Volatile Archives
Remap-cygwin: file:cygwin_mirrors /cygwin # ; file:backends_cygwin # incomplete, please create this file or specify preferred mirrors here
Remap-sfnet: file:sfnet_mirrors # ; file:backends_sfnet # incomplete, please create this file or specify preferred mirrors here
Remap-alxrep: file:archlx_mirrors /archlinux # ; file:backend_archlx # Arch Linux
Remap-fedora: file:fedora_mirrors # Fedora Linux
Remap-epel: file:epel_mirrors # Fedora EPEL
Remap-slrep: file:sl_mirrors # Scientific Linux
# This is usually not needed for security.debian.org because it's always the
# same DNS hostname. However, it might be enabled in order to use hooks,
# ForceManaged mode or special flags in this context.
# Remap-secdeb: security.debian.org
# Virtual page accessible in a web browser to see statistics and status
# information, i.e. under http://localhost:3142/acng-report.html
ReportPage: acng-report.html
# Socket file for accessing through local UNIX socket instead of TCP/IP. Can be
# used with inetd bridge or cron client.
# SocketPath:/var/run/apt-cacher-ng/socket
# Forces log file to be written to disk after every line when set to 1. Default
# is 0, buffers are flushed when the client disconnects.
#
# (technically, alias to the Debug option, see its documentation for details)
#
# UnbufferLogs: 0
# Set to 0 to store only type, time and transfer sizes.
# 1 -> client IP and relative local path are logged too
# VerboseLog: 1
# Don't detach from the console
# ForeGround: 0
# Store the pid of the daemon process therein
# PidFile: /var/run/apt-cacher-ng/pid
# Forbid outgoing connections, work around them or respond with 503 error
# offlinemode:0
# Forbid all downloads that don't run through preconfigured backends (.where)
#ForceManaged: 0
# Days before considering an unreferenced file expired (to be deleted).
# Warning: if the value is set too low and particular index files are not
# available for some days (mirror downtime) there is a risk of deletion of
# still useful package files.
ExTreshold: 4
# Stop expiration when a critical problem appeared. Currently only failed
# refresh of an index file is considered as critical.
#
# WARNING: don't touch this option or set to zero.
# Anything else is DANGEROUS and may cause data loss.
#
# ExAbortOnProblems: 1
# Replace some Windows/DOS-FS incompatible chars when storing
# StupidFs: 0
# Experimental feature for apt-listbugs: pass-through SOAP requests and
# responses to/from bugs.debian.org. If not set, default is true if
# ForceManaged is enabled and false otherwise.
# ForwardBtsSoap: 1
# The daemon has a small cache for DNS data, to speed up resolution. The
# expiration time of the DNS entries can be configured in seconds.
# DnsCacheSeconds: 3600
# Don't touch the following values without good consideration!
#
# Max. count of connection threads kept ready (for faster response in the
# future). Should be a sane value between 0 and average number of connections,
# and depend on the amount of spare RAM.
# MaxStandbyConThreads: 8
#
# Hard limit of active thread count for incoming connections, i.e. operation
# is refused when this value is reached (below zero = unlimited).
# MaxConThreads: -1
#
# Pigeonholing files with regular expressions (static/volatile). Can be
# overridden here, but this should not be done permanently because future
# updates of the default settings would not be applied later.
# VfilePattern = (^|.*?/)(Index|Packages(\.gz|\.bz2|\.lzma|\.xz)?|InRelease|Release|Release\.gpg|Sources(\.gz|\.bz2|\.lzma|\.xz)?|release|index\.db-.*\.gz|Contents-[^/]*(\.gz|\.bz2|\.lzma|\.xz)?|pkglist[^/]*\.bz2|rclist[^/]*\.bz2|/meta-release[^/]*|Translation[^/]*(\.gz|\.bz2|\.lzma|\.xz)?|MD5SUMS|SHA1SUMS|((setup|setup-legacy)(\.ini|\.bz2|\.hint)(\.sig)?)|mirrors\.lst|repo(index|md)\.xml(\.asc|\.key)?|directory\.yast|products|content(\.asc|\.key)?|media|filelists\.xml\.gz|filelists\.sqlite\.bz2|repomd\.xml|packages\.[a-zA-Z][a-zA-Z]\.gz|info\.txt|license\.tar\.gz|license\.zip|.*\.db(\.tar\.gz)?|.*\.files\.tar\.gz|.*\.abs\.tar\.gz|metalink\?repo|.*prestodelta\.xml\.gz)$|/dists/.*/installer-[^/]+/[^0-9][^/]+/images/.*
# PfilePattern = .*(\.d?deb|\.rpm|\.dsc|\.tar(\.gz|\.bz2|\.lzma|\.xz)(\.gpg)?|\.diff(\.gz|\.bz2|\.lzma|\.xz)|\.jigdo|\.template|changelog|copyright|\.udeb|\.debdelta|\.diff/.*\.gz|(Devel)?ReleaseAnnouncement(\?.*)?|[a-f0-9]+-(susedata|updateinfo|primary|deltainfo).xml.gz|fonts/(final/)?[a-z]+32.exe(\?download.*)?|/dists/.*/installer-[^/]+/[0-9][^/]+/images/.*)$
# Whitelist for expiration, file types not to be removed even when being
# unreferenced. Default: many parts from VfilePattern where no parent index
# exists or might be unknown.
# WfilePattern = (^|.*?/)(Release|InRelease|Release\.gpg|(Packages|Sources)(\.gz|\.bz2|\.lzma|\.xz)?|Translation[^/]*(\.gz|\.bz2|\.lzma|\.xz)?|MD5SUMS|SHA1SUMS|.*\.xml|.*\.db\.tar\.gz|.*\.files\.tar\.gz|.*\.abs\.tar\.gz|[a-z]+32.exe)$|/dists/.*/installer-.*/images/.*
# Higher modes only working with the debug version
# Warning, writes a lot into apt-cacher.err logfile
# Value overwrites UnbufferLogs setting (aliased)
# Debug:3
# Usually, general purpose proxies like Squid expose the IP address of the
# client user to the remote server using the X-Forwarded-For HTTP header. This
# behaviour can be optionally turned on with the Expose-Origin option.
# ExposeOrigin: 0
# When logging the originating IP address, trust the information supplied by
# the client in the X-Forwarded-For header.
# LogSubmittedOrigin: 0
# The version string reported to the peer, to be displayed as HTTP client (and
# version) in the logs of the mirror.
# WARNING: some archives use this header to detect/guess capabilities of the
# client (i.e. redirection support) and change the behaviour accordingly, while
# ACNG might not support the expected features. Expect side effects.
#
# UserAgent: Yet Another HTTP Client/1.2.3p4
# In some cases the Import and Expiration tasks might create fresh volatile
# data for internal use by reconstructing them using patch files. This
# by-product might be recompressed with bzip2 and with some luck the resulting
# file becomes identical to the *.bz2 file on the server, usable for APT
# clients trying to fetch the full .bz2 compressed version. Injection of the
# generated files into the cache has however a disadvantage on underpowered
# servers: bzip2 compression can create high load on the server system and the
# visible download of the busy .bz2 files also becomes slower.
#
# RecompBz2: 0
# Network timeout for outgoing connections.
# NetworkTimeout: 60
# Sometimes it makes sense to not store the data in cache and just return the
# package data to client as it comes in. DontCache parameters can enable this
# behaviour for certain URL types. The tokens are extended regular expressions
# that URLs are matched against.
#
# DontCacheRequested is applied to the URL as it comes in from the client.
# Example: exclude packages built with kernel-package for x86
# DontCacheRequested: linux-.*_10\...\.Custo._i386
# Example usecase: exclude popular private IP ranges from caching
# DontCacheRequested: 192.168.0 ^10\..* 172.30
#
# DontCacheResolved is applied to URLs after mapping to the target server. If
# multiple backend servers are specified then it's only matched against the
# download link for the FIRST possible source (due to implementation limits).
# Example usecase: all Ubuntu stuff comes from a local mirror (specified as
# backend), don't cache it again:
# DontCacheResolved: ubuntumirror.local.net
#
# DontCache directive sets (overrides) both, DontCacheResolved and
# DontCacheRequested. Provided for convenience, see those directives for
# details.
#
# Default permission set of freshly created files and directories, as octal
# numbers (see chmod(1) for details).
# Can by limited by the umask value (see umask(2) for details) if it's set in
# the environment of the starting shell, e.g. in apt-cacher-ng init script or
# in its configuration file.
# DirPerms: 00755
# FilePerms: 00664
#
#
# It's possible to use apt-cacher-ng as a regular web server with limited
# feature set, i.e.
# including directory browsing and download of any file;
# excluding sorting, mime types/encodings, CGI execution, index page
# redirection and other funny things.
# To get this behavior, mappings between virtual directories and real
# directories on the server must be defined with the LocalDirs directive.
# Virtual and real dirs are separated by spaces, multiple pairs are separated
# by semi-colons. Real directories must be absolute paths.
# NOTE: Since the names of that key directories share the same namespace as
# repository names (see Remap-...) it's administrators job to avoid such
# collisions on them (unless created deliberately).
#
# LocalDirs: woo /data/debarchive/woody ; hamm /data/debarchive/hamm
# Precache a set of files referenced by specified index files. This can be used
# to create a partial mirror usable for offline work. There are certain limits
# and restrictions on the path specification, see manual for details. A list of
# (maybe) relevant index files could be retrieved via
# "apt-get --print-uris update" on a client machine.
#
# PrecacheFor: debrep/dists/unstable/*/source/Sources* debrep/dists/unstable/*/binary-amd64/Packages*
# Arbitrary set of data to append to request headers sent over the wire. Should
# be a well-formatted HTTP headers part including newlines (DOS style) which
# can be entered as escape sequences (\r\n).
# RequestAppendix: X-Tracking-Choice: do-not-track\r\n
# Specifies the IP protocol families to use for remote connections. Order does
# matter, first specified are considered first. Possible combinations:
# v6 v4
# v4 v6
# v6
# v4
# (empty or not set: use system default)
#
# ConnectProto: v6 v4
# Regular expiration algorithm finds package files which are no longer listed
# in any index file and removes them after a safety period.
# This option allows keeping more versions of a package in the cache after
# the safety period is over.
# KeepExtraVersions: 1
# Optionally uses TCP access control provided by libwrap, see hosts_access(5)
# for details. Daemon name is apt-cacher-ng. Default if not set: decided on
# startup by looking for explicit mentioning of apt-cacher-ng in
# /etc/hosts.allow or /etc/hosts.deny files.
# UseWrap: 0
# If many machines from the same local network attempt to update index files
# (apt-get update) at nearly the same time, the known state of these index
# files is temporarily frozen and multiple requests receive the cached response
# without contacting the server. This parameter (in seconds) specifies the
# length of this period before the files are considered outdated.
# Setting it too low transfers more data and increases remote server load,
# setting it too high (more than a couple of minutes) increases the risk of
# delivering inconsistent responses to the clients.
# FreshIndexMaxAge: 27
# Usually, users are not allowed to specify custom TCP ports of remote
# mirrors in their requests; only the default HTTP port can be used (instead,
# the proxy administrator can create Remap- rules with custom ports). This
# restriction can be disabled by specifying a list of allowed ports, or 0 for
# any port.
#
# AllowUserPorts: 80
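# To lift the port restriction entirely (illustrative, per the note above):
# AllowUserPorts: 0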
# Normally the HTTP redirection responses are forwarded to the original caller
# (i.e. APT) which starts a new download attempt from the new URL. This
# solution is ok for client configurations with proxy mode but doesn't work
# well with configurations using URL prefixes. To work around this the server
# can restart its own download with another URL. However, this might be used to
# circumvent download source policies by malicious users.
# The RedirMax option specifies how many such redirects the server should
# follow per request; 0 disables the internal redirection. If not set, the
# default value is 0 if ForceManaged is used and 5 otherwise.
#
# RedirMax: 5

View File

@ -0,0 +1 @@
unattended-upgrades unattended-upgrades/enable_auto_updates boolean <%= node['apt']['unattended_upgrades']['enable'] ? 'true' : 'false' %>
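# A sketch of the attribute that drives this template (the attribute path is
# taken from the ERB expression above; the value shown is illustrative):
#   default['apt']['unattended_upgrades']['enable'] = true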

View File

@ -1 +0,0 @@
~FC005

View File

@ -1,7 +0,0 @@
source 'https://supermarket.chef.io'
metadata
group :integration do
cookbook 'docker_test', path: 'test/cookbooks/docker_test'
end

View File

@ -2,6 +2,10 @@
This file is used to list changes made in each version of the docker cookbook.
## 4.9.3 (2019-08-14)
- Fixes issue #1061: docker_volume 'driver' and 'opts' don't work
## 4.9.2 (2019-02-15)
- Support setting shared memory size.

View File

@ -1,13 +0,0 @@
# This gemfile provides additional gems for testing and releasing this cookbook
# It is meant to be installed on top of ChefDK which provides the majority
# of the necessary gems for testing this cookbook
#
# Run 'chef exec bundle install' to install these dependencies
source 'https://rubygems.org'
gem 'berkshelf'
gem 'community_cookbook_releaser'
gem 'kitchen-dokken'
gem 'kitchen-inspec'
gem 'test-kitchen'

View File

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,106 +0,0 @@
# Put files/directories that should be ignored in this file when uploading
# to a chef-server or supermarket.
# Lines that start with '# ' are comments.
# OS generated files #
######################
.DS_Store
Icon?
nohup.out
ehthumbs.db
Thumbs.db
# SASS #
########
.sass-cache
# EDITORS #
###########
\#*
.#*
*~
*.sw[a-z]
*.bak
REVISION
TAGS*
tmtags
*_flymake.*
*_flymake
*.tmproj
.project
.settings
mkmf.log
## COMPILED ##
##############
a.out
*.o
*.pyc
*.so
*.com
*.class
*.dll
*.exe
*/rdoc/
# Testing #
###########
.watchr
.rspec
spec/*
spec/fixtures/*
test/*
features/*
examples/*
Guardfile
Procfile
.kitchen*
.rubocop.yml
spec/*
Rakefile
.travis.yml
.foodcritic
.codeclimate.yml
# SCM #
#######
.git
*/.git
.gitignore
.gitmodules
.gitconfig
.gitattributes
.svn
*/.bzr/*
*/.hg/*
*/.svn/*
# Berkshelf #
#############
Berksfile
Berksfile.lock
cookbooks/*
tmp
# Policyfile #
##############
Policyfile.rb
Policyfile.lock.json
# Cookbooks #
#############
CONTRIBUTING*
CHANGELOG*
TESTING*
# Strainer #
############
Colanderfile
Strainerfile
.colander
.strainer
# Vagrant #
###########
.vagrant
Vagrantfile

View File

@ -1,175 +0,0 @@
---
driver:
name: dokken
chef_version: latest
privileged: true
volumes: [
'/var/lib/docker', '/var/lib/docker-one', '/var/lib/docker-two'
]
transport:
name: dokken
provisioner:
name: dokken
deprecations_as_errors: true
verifier:
name: inspec
platforms:
- name: amazonlinux
driver:
image: dokken/amazonlinux
pid_one_command: /sbin/init
- name: amazonlinux-2
driver:
image: dokken/amazonlinux-2
pid_one_command: /usr/lib/systemd/systemd
- name: debian-8
driver:
image: dokken/debian-8
pid_one_command: /bin/systemd
- name: debian-9
driver:
image: dokken/debian-9
pid_one_command: /bin/systemd
- name: centos-7
driver:
image: dokken/centos-7
pid_one_command: /usr/lib/systemd/systemd
- name: fedora-28
driver:
image: dokken/fedora-28
pid_one_command: /usr/lib/systemd/systemd
- name: ubuntu-16.04
driver:
image: dokken/ubuntu-16.04
pid_one_command: /bin/systemd
- name: ubuntu-18.04
driver:
image: dokken/ubuntu-18.04
pid_one_command: /bin/systemd
suites:
###############################
# docker_installation resources
###############################
- name: installation_script_main
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
attributes:
docker:
repo: 'main'
run_list:
- recipe[docker_test::installation_script]
- name: installation_script_test
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
attributes:
docker:
repo: 'test'
run_list:
- recipe[docker_test::installation_script]
- name: installation_script_experimental
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
attributes:
docker:
repo: 'experimental'
run_list:
- recipe[docker_test::installation_script]
- name: installation_package
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::installation_package]
- name: installation_tarball
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::installation_tarball]
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
##################
# resource testing
##################
- name: resources
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::image]
- recipe[docker_test::container]
- recipe[docker_test::exec]
- recipe[docker_test::plugin]
- name: network
includes: [
'ubuntu-16.04',
]
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::network]
- name: volume
includes: [
'ubuntu-16.04',
]
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::volume]
- name: registry
includes: [
'ubuntu-16.04',
]
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::registry]
#############################
# quick service smoke testing
#############################
- name: smoke
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
run_list:
- recipe[docker_test::smoke]
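# To converge and verify a single suite/platform pair locally (the instance
# name is illustrative, following Test Kitchen's <suite>-<platform> naming):
#   kitchen test smoke-ubuntu-1804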

View File

@ -703,33 +703,33 @@ module DockerCookbook
if new_resource.detach == true &&
(
new_resource.attach_stderr == true ||
new_resource.attach_stdin == true ||
new_resource.attach_stdout == true ||
new_resource.stdin_once == true
)
raise Chef::Exceptions::ValidationFailed, 'Conflicting options detach, attach_stderr, attach_stdin, attach_stdout, stdin_once.'
end
if new_resource.network_mode == 'host' &&
(
!(new_resource.hostname.nil? || new_resource.hostname.empty?) ||
!(new_resource.mac_address.nil? || new_resource.mac_address.empty?)
)
raise Chef::Exceptions::ValidationFailed, 'Cannot specify hostname or mac_address when network_mode is host.'
end
if new_resource.network_mode == 'container' &&
(
!(new_resource.hostname.nil? || new_resource.hostname.empty?) ||
!(new_resource.dns.nil? || new_resource.dns.empty?) ||
!(new_resource.dns_search.nil? || new_resource.dns_search.empty?) ||
!(new_resource.mac_address.nil? || new_resource.mac_address.empty?) ||
!(new_resource.extra_hosts.nil? || new_resource.extra_hosts.empty?) ||
!(new_resource.exposed_ports.nil? || new_resource.exposed_ports.empty?) ||
!(new_resource.port_bindings.nil? || new_resource.port_bindings.empty?) ||
!(new_resource.publish_all_ports.nil? || new_resource.publish_all_ports.empty?) ||
!new_resource.port.nil?
)
raise Chef::Exceptions::ValidationFailed, 'Cannot specify hostname, dns, dns_search, mac_address, extra_hosts, exposed_ports, port_bindings, publish_all_ports, port when network_mode is container.'
end
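# For illustration only (hypothetical resource): a declaration like the
# following would trip the detach validation above, since detach cannot be
# combined with the attach_* or stdin_once options:
#   docker_container 'conflicting' do
#     detach true
#     attach_stdout true
#   end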

View File

@ -19,8 +19,8 @@ module DockerCookbook
action :create do
converge_by "creating volume #{new_resource.volume_name}" do
opts = {}
opts['Driver'] = driver if property_is_set?(:driver)
opts['DriverOpts'] = opts if property_is_set?(:opts)
opts['Driver'] = new_resource.driver if property_is_set?(:driver)
opts['DriverOpts'] = new_resource.opts if property_is_set?(:opts)
Docker::Volume.create(new_resource.volume_name, opts, connection)
end if current_resource.nil?
end
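# A usage sketch (the volume name and driver options are illustrative; opts is
# shown as a string map, matching the Docker API's DriverOpts field). With the
# change above, driver and opts are now read from new_resource:
#   docker_volume 'example_volume' do
#     driver 'local'
#     opts 'type' => 'tmpfs', 'device' => 'tmpfs'
#   end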

View File

@ -0,0 +1 @@
{"name":"docker","version":"4.9.3","description":"Provides docker_service, docker_image, and docker_container resources","long_description":"","maintainer":"Chef Software, Inc.","maintainer_email":"cookbooks@chef.io","license":"Apache-2.0","platforms":{"amazon":">= 0.0.0","centos":">= 0.0.0","scientific":">= 0.0.0","oracle":">= 0.0.0","debian":">= 0.0.0","fedora":">= 0.0.0","redhat":">= 0.0.0","ubuntu":">= 0.0.0"},"dependencies":{},"recommendations":{},"suggestions":{},"conflicting":{},"providing":{},"replacing":{},"attributes":{},"groupings":{},"recipes":{},"source_url":"https://github.com/chef-cookbooks/docker","issues_url":"https://github.com/chef-cookbooks/docker/issues","gems":[["docker-api","~> 1.34.0"]],"chef_version":[[">= 12.15"]],"ohai_version":[]}

View File

@ -3,7 +3,7 @@ maintainer 'Chef Software, Inc.'
maintainer_email 'cookbooks@chef.io'
license 'Apache-2.0'
description 'Provides docker_service, docker_image, and docker_container resources'
version '4.9.2'
version '4.9.3'
source_url 'https://github.com/chef-cookbooks/docker'
issues_url 'https://github.com/chef-cookbooks/docker/issues'

View File

@ -1,925 +0,0 @@
require 'spec_helper'
describe 'docker_test::container' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
before do
stub_command("[ ! -z `docker ps -qaf 'name=busybox_ls$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=bill$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=hammer_time$'` ]").and_return(false)
stub_command('docker ps -a | grep red_light | grep Exited').and_return(true)
stub_command("[ ! -z `docker ps -qaf 'name=red_light$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=green_light$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=quitter$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=restarter$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=uber_options$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=kill_after$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=change_network_mode$'` ]").and_return(false)
stub_command('docker images | grep cmd_change').and_return(false)
stub_command('docker ps -a | grep cmd_change$').and_return(false)
end
context 'testing create action' do
it 'create docker_container[hello-world]' do
expect(chef_run).to create_docker_container('hello-world').with(
api_retries: 3,
read_timeout: 60,
container_name: 'hello-world',
repo: 'hello-world',
tag: 'latest',
command: ['/hello'],
cgroup_parent: '',
cpu_shares: 0,
cpuset_cpus: '',
detach: true,
domain_name: '',
log_driver: 'json-file',
memory: 0,
memory_swap: 0,
network_disabled: false,
outfile: nil,
restart_maximum_retry_count: 0,
restart_policy: nil,
security_opt: nil,
signal: 'SIGTERM',
user: ''
)
end
end
context 'testing run action' do
it 'run docker_container[busybox_ls]' do
expect(chef_run).to run_docker_container('busybox_ls').with(
repo: 'busybox',
command: ['ls', '-la', '/']
)
end
it 'run_if_missing docker_container[alpine_ls]' do
expect(chef_run).to run_if_missing_docker_container('alpine_ls').with(
repo: 'alpine',
tag: '3.1',
command: ['ls', '-la', '/']
)
end
end
context 'testing ports property' do
it 'run docker_container[an_echo_server]' do
expect(chef_run).to run_docker_container('an_echo_server').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '7', '-e', '/bin/cat'],
port: '7:7'
)
end
it 'run docker_container[another_echo_server]' do
expect(chef_run).to run_docker_container('another_echo_server').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '7', '-e', '/bin/cat'],
port: '7'
)
end
it 'run docker_container[an_udp_echo_server]' do
expect(chef_run).to run_docker_container('an_udp_echo_server').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ul', '-p', '7', '-e', '/bin/cat'],
port: '5007:7/udp'
)
end
it 'run docker_container[multi_ip_port]' do
expect(chef_run).to run_docker_container('multi_ip_port').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ul', '-p', '7', '-e', '/bin/cat'],
port: ['8301', '8301:8301/udp', '127.0.0.1:8500:8500', '127.0.1.1:8500:8500']
)
end
it 'run docker_container[port_range]' do
expect(chef_run).to run_docker_container('port_range').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: ['2000-2001', '2000-2001/udp', '3000-3001/tcp', '7000-7002:8000-8002']
)
end
end
context 'testing action :kill' do
it 'run execute[bill]' do
expect(chef_run).to run_execute('bill').with(
command: 'docker run --name bill -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'kill docker_container[bill]' do
expect(chef_run).to kill_docker_container('bill')
end
end
context 'testing action :stop' do
it 'run execute[hammer_time]' do
expect(chef_run).to run_execute('hammer_time').with(
command: 'docker run --name hammer_time -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'stop docker_container[hammer_time]' do
expect(chef_run).to stop_docker_container('hammer_time')
end
end
context 'testing action :pause' do
it 'run execute[rm stale red_light]' do
expect(chef_run).to run_execute('rm stale red_light').with(
command: 'docker rm -f red_light'
)
end
it 'run execute[red_light]' do
expect(chef_run).to run_execute('red_light').with(
command: 'docker run --name red_light -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'pause docker_container[red_light]' do
expect(chef_run).to pause_docker_container('red_light')
end
end
context 'testing action :unpause' do
it 'run bash[green_light]' do
expect(chef_run).to run_bash('green_light')
end
it 'unpause docker_container[green_light]' do
expect(chef_run).to unpause_docker_container('green_light')
end
end
context 'testing action :restart' do
it 'run bash[quitter]' do
expect(chef_run).to run_bash('quitter')
end
it 'restart docker_container[quitter]' do
expect(chef_run).to restart_docker_container('quitter')
end
it 'create file[/marker_container_quitter_restarter]' do
expect(chef_run).to create_file('/marker_container_quitter_restarter')
end
it 'run execute[restarter]' do
expect(chef_run).to run_execute('restarter').with(
command: 'docker run --name restarter -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'restart docker_container[restarter]' do
expect(chef_run).to restart_docker_container('restarter')
end
it 'create file[/marker_container_restarter]' do
expect(chef_run).to create_file('/marker_container_restarter')
end
end
context 'testing action :delete' do
it 'run execute[deleteme]' do
expect(chef_run).to run_execute('deleteme').with(
command: 'docker run --name deleteme -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'create file[/marker_container_deleteme]' do
expect(chef_run).to create_file('/marker_container_deleteme')
end
it 'delete docker_container[deleteme]' do
expect(chef_run).to delete_docker_container('deleteme')
end
end
context 'testing action :redeploy' do
it 'runs docker_container[redeployer]' do
expect(chef_run).to run_docker_container('redeployer').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '7'
)
end
it 'creates docker_container[unstarted_redeployer]' do
expect(chef_run).to create_docker_container('unstarted_redeployer').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '7'
)
end
it 'runs execute[redeploy redeployers]' do
expect(chef_run).to run_execute('redeploy redeployers')
end
end
context 'testing bind_mounter' do
it 'creates directory[/hostbits]' do
expect(chef_run).to create_directory('/hostbits').with(
owner: 'root',
group: 'root',
mode: '0755'
)
end
it 'creates file[/hostbits/hello.txt]' do
expect(chef_run).to create_file('/hostbits/hello.txt').with(
content: 'hello there\n',
owner: 'root',
group: 'root',
mode: '0644'
)
end
it 'creates directory[/more-hostbits]' do
expect(chef_run).to create_directory('/more-hostbits').with(
owner: 'root',
group: 'root',
mode: '0755'
)
end
it 'creates file[/more-hostbits/hello.txt]' do
expect(chef_run).to create_file('/more-hostbits/hello.txt').with(
content: 'hello there\n',
owner: 'root',
group: 'root',
mode: '0644'
)
end
it 'run_if_missing docker_container[bind_mounter]' do
expect(chef_run).to run_if_missing_docker_container('bind_mounter').with(
repo: 'busybox',
command: ['ls', '-la', '/bits', '/more-bits'],
volumes_binds: ['/hostbits:/bits', '/more-hostbits:/more-bits', '/winter:/spring:ro'],
volumes: { '/snow' => {}, '/summer' => {} }
)
end
end
context 'testing binds_alias' do
it 'run_if_missing docker_container[binds_alias]' do
expect(chef_run).to run_if_missing_docker_container('binds_alias').with(
repo: 'busybox',
command: ['ls', '-la', '/bits', '/more-bits'],
volumes_binds: ['/fall:/sun', '/winter:/spring:ro'],
volumes: { '/snow' => {}, '/summer' => {} }
)
end
end
context 'testing volumes_from' do
it 'creates directory[/chefbuilder]' do
expect(chef_run).to create_directory('/chefbuilder').with(
owner: 'root',
group: 'root'
)
end
it 'runs execute[copy chef to chefbuilder]' do
expect(chef_run).to run_execute('copy chef to chefbuilder').with(
command: 'tar cf - /opt/chef | tar xf - -C /chefbuilder',
creates: '/chefbuilder/opt'
)
end
it 'creates file[/chefbuilder/Dockerfile]' do
expect(chef_run).to create_file('/chefbuilder/Dockerfile')
end
it 'build_if_missing docker_image[chef_container]' do
expect(chef_run).to build_if_missing_docker_image('chef_container').with(
tag: 'latest',
source: '/chefbuilder'
)
end
it 'create docker_container[chef_container]' do
expect(chef_run).to create_docker_container('chef_container').with(
command: ['true'],
volumes: { '/opt/chef' => {} }
)
end
it 'run_if_missing docker_container[ohai_debian]' do
expect(chef_run).to run_if_missing_docker_container('ohai_debian').with(
command: ['/opt/chef/embedded/bin/ohai', 'platform'],
repo: 'debian',
volumes_from: ['chef_container']
)
end
end
context 'testing env' do
it 'run_if_missing docker_container[env]' do
expect(chef_run).to run_if_missing_docker_container('env').with(
repo: 'debian',
env: ['PATH=/usr/bin', 'FOO=bar'],
command: ['env']
)
end
end
context 'testing entrypoint' do
it 'run_if_missing docker_container[ohai_again]' do
expect(chef_run).to run_if_missing_docker_container('ohai_again').with(
repo: 'debian',
volumes_from: ['chef_container'],
entrypoint: ['/opt/chef/embedded/bin/ohai']
)
end
it 'run_if_missing docker_container[ohai_again_debian]' do
expect(chef_run).to run_if_missing_docker_container('ohai_again_debian').with(
repo: 'debian',
volumes_from: ['chef_container'],
entrypoint: ['/opt/chef/embedded/bin/ohai'],
command: ['platform']
)
end
end
context 'testing Dockerfile CMD directive' do
it 'creates directory[/cmd_test]' do
expect(chef_run).to create_directory('/cmd_test')
end
it 'creates file[/cmd_test/Dockerfile]' do
expect(chef_run).to create_file('/cmd_test/Dockerfile')
end
it 'build_if_missing docker_image[cmd_test]' do
expect(chef_run).to build_if_missing_docker_image('cmd_test').with(
tag: 'latest',
source: '/cmd_test'
)
end
it 'run_if_missing docker_container[cmd_test]' do
expect(chef_run).to run_if_missing_docker_container('cmd_test')
end
end
context 'testing autoremove' do
it 'runs docker_container[sean_was_here]' do
expect(chef_run).to run_docker_container('sean_was_here').with(
repo: 'debian',
volumes_from: ['chef_container'],
autoremove: true
)
end
it 'creates file[/marker_container_sean_was_here]' do
expect(chef_run).to create_file('/marker_container_sean_was_here')
end
end
context 'testing detach' do
it 'runs docker_container[attached]' do
expect(chef_run).to run_docker_container('attached').with(
repo: 'debian',
volumes_from: ['chef_container'],
detach: false
)
end
it 'creates file[/marker_container_attached]' do
expect(chef_run).to create_file('/marker_container_attached')
end
context 'with timeout' do
it 'runs docker_container[attached_with_timeout]' do
expect(chef_run).to run_docker_container('attached_with_timeout').with(
repo: 'debian',
volumes_from: ['chef_container'],
detach: false,
timeout: 10
)
end
it 'creates file[/marker_container_attached_with_timeout]' do
expect(chef_run).to create_file('/marker_container_attached_with_timeout')
end
end
end
context 'testing cap_add' do
it 'run_if_missing docker_container[cap_add_net_admin]' do
expect(chef_run).to run_if_missing_docker_container('cap_add_net_admin').with(
repo: 'debian',
command: ['bash', '-c', 'ip addr add 10.9.8.7/24 brd + dev eth0 label eth0:0 ; ip addr list'],
cap_add: ['NET_ADMIN']
)
end
it 'run_if_missing docker_container[cap_add_net_admin_error]' do
expect(chef_run).to run_if_missing_docker_container('cap_add_net_admin_error').with(
repo: 'debian',
command: ['bash', '-c', 'ip addr add 10.9.8.7/24 brd + dev eth0 label eth0:0 ; ip addr list']
)
end
end
context 'testing cap_drop' do
it 'run_if_missing docker_container[cap_drop_mknod]' do
expect(chef_run).to run_if_missing_docker_container('cap_drop_mknod').with(
repo: 'debian',
command: ['bash', '-c', 'mknod -m 444 /dev/urandom2 c 1 9 ; ls -la /dev/urandom2'],
cap_drop: ['MKNOD']
)
end
it 'run_if_missing docker_container[cap_drop_mknod_error]' do
expect(chef_run).to run_if_missing_docker_container('cap_drop_mknod_error').with(
repo: 'debian',
command: ['bash', '-c', 'mknod -m 444 /dev/urandom2 c 1 9 ; ls -la /dev/urandom2']
)
end
end
context 'testing hostname and domain_name' do
it 'run_if_missing docker_container[fqdn]' do
expect(chef_run).to run_if_missing_docker_container('fqdn').with(
repo: 'debian',
command: ['hostname', '-f'],
hostname: 'computers',
domain_name: 'biz'
)
end
end
context 'testing dns' do
it 'run_if_missing docker_container[dns]' do
expect(chef_run).to run_if_missing_docker_container('dns').with(
repo: 'debian',
command: ['cat', '/etc/resolv.conf'],
hostname: 'computers',
dns: ['4.3.2.1', '1.2.3.4'],
dns_search: ['computers.biz', 'chef.io']
)
end
end
context 'testing extra_hosts' do
it 'run_if_missing docker_container[extra_hosts]' do
expect(chef_run).to run_if_missing_docker_container('extra_hosts').with(
repo: 'debian',
command: ['cat', '/etc/hosts'],
extra_hosts: ['east:4.3.2.1', 'west:1.2.3.4']
)
end
end
context 'testing cpu_shares' do
it 'run_if_missing docker_container[cpu_shares]' do
expect(chef_run).to run_if_missing_docker_container('cpu_shares').with(
repo: 'alpine',
tag: '3.1',
command: ['ls', '-la'],
cpu_shares: 512
)
end
end
context 'testing cpuset_cpus' do
it 'run_if_missing docker_container[cpuset_cpus]' do
expect(chef_run).to run_if_missing_docker_container('cpuset_cpus').with(
repo: 'alpine',
tag: '3.1',
command: ['ls', '-la'],
cpuset_cpus: '0,1'
)
end
end
context 'testing restart_policy' do
it 'run_if_missing docker_container[try_try_again]' do
expect(chef_run).to run_if_missing_docker_container('try_try_again').with(
repo: 'alpine',
tag: '3.1',
command: ['grep', 'asdasdasd', '/etc/passwd'],
restart_policy: 'on-failure',
restart_maximum_retry_count: 2
)
end
it 'run_if_missing docker_container[reboot_survivor]' do
expect(chef_run).to run_if_missing_docker_container('reboot_survivor').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '123', '-e', '/bin/cat'],
port: '123',
restart_policy: 'always'
)
end
it 'run_if_missing docker_container[reboot_survivor_retry]' do
expect(chef_run).to run_if_missing_docker_container('reboot_survivor_retry').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '123', '-e', '/bin/cat'],
port: '123',
restart_policy: nil,
restart_maximum_retry_count: 2
)
end
end
context 'testing links' do
it 'runs docker_container[link_source]' do
expect(chef_run).to run_docker_container('link_source').with(
repo: 'alpine',
tag: '3.1',
env: ['FOO=bar', 'BIZ=baz'],
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '321'
)
end
it 'runs docker_container[link_source_2]' do
expect(chef_run).to run_docker_container('link_source_2').with(
repo: 'alpine',
tag: '3.1',
env: ['FOO=few', 'BIZ=buzz'],
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '322'
)
end
it 'run_if_missing docker_container[link_target_1]' do
expect(chef_run).to run_if_missing_docker_container('link_target_1').with(
repo: 'alpine',
tag: '3.1',
env: ['ASD=asd'],
command: ['ping', '-c', '1', 'hello'],
links: ['link_source:hello']
)
end
it 'run_if_missing docker_container[link_target_2]' do
expect(chef_run).to run_if_missing_docker_container('link_target_2').with(
repo: 'alpine',
tag: '3.1',
command: ['env'],
links: ['link_source:hello']
)
end
it 'run_if_missing docker_container[link_target_3]' do
expect(chef_run).to run_if_missing_docker_container('link_target_3').with(
repo: 'alpine',
tag: '3.1',
env: ['ASD=asd'],
command: ['ping', '-c', '1', 'hello_again'],
links: ['link_source:hello', 'link_source_2:hello_again']
)
end
it 'run_if_missing docker_container[link_target_4]' do
expect(chef_run).to run_if_missing_docker_container('link_target_4').with(
repo: 'alpine',
tag: '3.1',
command: ['env'],
links: ['link_source:hello', 'link_source_2:hello_again']
)
end
it 'runs execute[redeploy_link_source]' do
expect(chef_run).to run_execute('redeploy_link_source')
end
end
context 'testing link removal' do
it 'run_if_missing docker_container[another_link_source]' do
expect(chef_run).to run_if_missing_docker_container('another_link_source').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '456', '-e', '/bin/cat'],
port: '456'
)
end
it 'run_if_missing docker_container[another_link_target]' do
expect(chef_run).to run_if_missing_docker_container('another_link_target').with(
repo: 'alpine',
tag: '3.1',
command: ['ping', '-c', '1', 'hello'],
links: ['another_link_source:derp']
)
end
end
context 'testing volume removal' do
it 'creates directory[/dangler]' do
expect(chef_run).to create_directory('/dangler').with(
owner: 'root',
group: 'root',
mode: '0755'
)
end
it 'creates file[/dangler/Dockerfile]' do
expect(chef_run).to create_file('/dangler/Dockerfile')
end
it 'build_if_missing docker_image[dangler]' do
expect(chef_run).to build_if_missing_docker_image('dangler').with(
tag: 'latest',
source: '/dangler'
)
end
it 'creates docker_container[dangler]' do
expect(chef_run).to create_docker_container('dangler').with(
command: ['true']
)
end
it 'creates file[/marker_container_dangler]' do
expect(chef_run).to create_file('/marker_container_dangler')
end
it 'deletes docker_container[dangler_volume_remover]' do
expect(chef_run).to delete_docker_container('dangler_volume_remover').with(
container_name: 'dangler',
remove_volumes: true
)
end
end
context 'testing mutator' do
it 'tags docker_tag[mutator_from_busybox]' do
expect(chef_run).to tag_docker_tag('mutator_from_busybox').with(
target_repo: 'busybox',
target_tag: 'latest',
to_repo: 'someara/mutator',
to_tag: 'latest'
)
end
it 'run_if_missing docker_container[mutator]' do
expect(chef_run).to run_if_missing_docker_container('mutator').with(
repo: 'someara/mutator',
tag: 'latest',
command: ['sh', '-c', 'touch /mutator-`date +"%Y-%m-%d_%H-%M-%S"`'],
outfile: '/mutator.tar',
force: true
)
end
it 'runs execute[commit mutator]' do
expect(chef_run).to run_execute('commit mutator')
end
end
context 'testing network_mode' do
it 'runs docker_container[network_mode]' do
expect(chef_run).to run_docker_container('network_mode').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '776', '-e', '/bin/cat'],
port: '776:776',
network_mode: 'host'
)
end
end
it 'runs execute[change_network_mode]' do
expect(chef_run).to run_execute('change_network_mode')
end
it 'runs docker_container[change_network_mode]' do
expect(chef_run).to run_docker_container('change_network_mode')
end
context 'testing ulimits' do
it 'runs docker_container[ulimits]' do
expect(chef_run).to run_docker_container('ulimits').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '778:778',
cap_add: ['SYS_RESOURCE'],
ulimits: [
'nofile=40960:40960',
'core=100000000:100000000',
'memlock=100000000:100000000',
]
)
end
end
context 'testing api_timeouts' do
it 'run_if_missing docker_container[api_timeouts]' do
expect(chef_run).to run_if_missing_docker_container('api_timeouts').with(
command: ['nc', '-ll', '-p', '779', '-e', '/bin/cat'],
repo: 'alpine',
tag: '3.1',
read_timeout: 60,
write_timeout: 60
)
end
end
context 'testing uber_options' do
it 'runs execute[uber_options]' do
expect(chef_run).to run_execute('uber_options').with(
command: 'docker run --name uber_options -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'runs docker_container[uber_options]' do
expect(chef_run).to run_docker_container('uber_options').with(
repo: 'alpine',
tag: '3.1',
hostname: 'www',
domainname: 'computers.biz',
env: ['FOO=foo', 'BAR=bar'],
mac_address: '00:00:DE:AD:BE:EF',
network_disabled: false,
tty: true,
volumes_binds: ['/hostbits:/bits', '/more-hostbits:/more-bits'],
volumes: { '/root' => {} },
working_dir: '/',
cap_add: %w(NET_ADMIN SYS_RESOURCE),
cap_drop: ['MKNOD'],
cpu_shares: 512,
cpuset_cpus: '0,1',
dns: ['8.8.8.8', '8.8.4.4'],
dns_search: ['computers.biz'],
extra_hosts: ['east:4.3.2.1', 'west:1.2.3.4'],
links: ['link_source:hello'],
port: '1234:1234',
volumes_from: ['chef_container'],
user: 'operator',
entrypoint: ['/bin/sh', '-c'],
command: ['trap exit 0 SIGTERM; while :; do sleep 5; done'],
ulimits: [
'nofile=40960:40960',
'core=100000000:100000000',
'memlock=100000000:100000000',
],
labels: { 'foo' => 'bar', 'hello' => 'world' }
)
end
end
context 'testing overrides' do
it 'creates directory[/overrides]' do
expect(chef_run).to create_directory('/overrides').with(
owner: 'root',
group: 'root'
)
end
it 'creates file[/overrides/Dockerfile]' do
expect(chef_run).to create_file('/overrides/Dockerfile')
end
it 'build_if_missing docker_image[overrides]' do
expect(chef_run).to build_if_missing_docker_image('overrides').with(
tag: 'latest',
source: '/overrides',
force: true
)
end
it 'runs docker_container[overrides-1]' do
expect(chef_run).to run_docker_container('overrides-1').with(
repo: 'overrides'
)
end
it 'runs docker_container[overrides-2]' do
expect(chef_run).to run_docker_container('overrides-2').with(
repo: 'overrides',
user: 'operator',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
env: ['FOO=biz'],
volume: { '/var/log' => {} },
workdir: '/tmp'
)
end
end
context 'testing host overrides' do
it 'creates docker_container[host_override]' do
expect(chef_run).to create_docker_container('host_override').with(
repo: 'alpine',
host: 'tcp://127.0.0.1:2376',
command: ['ls', '-la', '/']
)
end
end
context 'testing logging drivers' do
it 'run_if_missing docker_container[syslogger]' do
expect(chef_run).to run_if_missing_docker_container('syslogger').with(
command: ['nc', '-ll', '-p', '780', '-e', '/bin/cat'],
repo: 'alpine',
tag: '3.1',
log_driver: 'syslog',
log_opts: { 'tag' => 'container-syslogger' }
)
end
end
context 'testing kill_after' do
it 'creates directory[/kill_after]' do
expect(chef_run).to create_directory('/kill_after').with(
owner: 'root',
group: 'root'
)
end
it 'creates file[/kill_after/loop.sh]' do
expect(chef_run).to create_file('/kill_after/loop.sh')
end
it 'creates file[/kill_after/Dockerfile]' do
expect(chef_run).to create_file('/kill_after/Dockerfile')
end
it 'build_if_missing docker_image[kill_after]' do
expect(chef_run).to build_if_missing_docker_image('kill_after').with(
tag: 'latest',
source: '/kill_after',
force: true
)
end
it 'run execute[kill_after]' do
expect(chef_run).to run_execute('kill_after').with(
command: 'docker run --name kill_after -d kill_after'
)
end
it 'stop docker_container[kill_after]' do
expect(chef_run).to stop_docker_container('kill_after')
end
it 'run_if_missing docker_container[pid_mode]' do
expect(chef_run).to run_if_missing_docker_container('pid_mode').with(
pid_mode: 'host'
)
end
it 'run_if_missing docker_container[ipc_mode]' do
expect(chef_run).to run_if_missing_docker_container('ipc_mode').with(
ipc_mode: 'host'
)
end
it 'run_if_missing docker_container[uts_mode]' do
expect(chef_run).to run_if_missing_docker_container('uts_mode').with(
uts_mode: 'host'
)
end
end
context 'testing ro_rootfs' do
it 'creates read-only rootfs' do
expect(chef_run).to run_if_missing_docker_container('ro_rootfs').with(
ro_rootfs: true
)
end
end
context 'testing health_check options' do
it 'sets health_check options' do
expect(chef_run).to run_docker_container('health_check').with(
repo: 'alpine',
tag: '3.1',
health_check: {
'Test' =>
[
'string',
],
'Interval' => 0,
'Timeout' => 0,
'Retries' => 0,
'StartPeriod' => 0,
}
)
end
end
end

View File

@ -1,41 +0,0 @@
require 'spec_helper'
describe 'docker_test::exec' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
it 'pull_if_missing docker_image[busybox]' do
expect(chef_run).to pull_if_missing_docker_image('busybox')
end
it 'run docker_container[busybox_exec]' do
expect(chef_run).to run_docker_container('busybox_exec').with(
repo: 'busybox',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done']
)
end
context 'testing run action' do
it 'run docker_exec[touch_it]' do
expect(chef_run).to run_docker_exec('touch_it').with(
container: 'busybox_exec',
command: ['touch', '/tmp/onefile'],
timeout: 120
)
end
it 'creates file[/marker_busybox_exec_onefile]' do
expect(chef_run).to create_file('/marker_busybox_exec_onefile')
end
it 'run docker_exec[another]' do
expect(chef_run).to run_docker_exec('poke_it').with(
container: 'busybox_exec',
command: ['touch', '/tmp/twofile']
)
end
it 'creates file[/marker_busybox_exec_twofile]' do
expect(chef_run).to create_file('/marker_busybox_exec_twofile')
end
end
end

View File

@ -1,24 +0,0 @@
require 'spec_helper'
describe 'docker_test::image_prune' do
context 'it steps over the provider' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '18.04').converge(described_recipe) }
context 'testing default action, default properties' do
it 'prunes docker_image[hello-world]' do
expect(chef_run).to prune_docker_image_prune('hello-world').with(
dangling: true
)
end
it 'prunes docker_image[prune-old-images]' do
expect(chef_run).to prune_docker_image_prune('prune-old-images').with(
dangling: true,
prune_until: '1h30m',
with_label: 'com.example.vendor=ACME',
without_label: 'no_prune'
)
end
end
end
end

View File

@ -1,271 +0,0 @@
require 'spec_helper'
describe 'docker_test::image' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
before do
stub_command('/usr/bin/test -f /tmp/registry/tls/ca-key.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/server-key.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.csr').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/key.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/client.csr').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/cert.pem').and_return(true)
stub_command("[ ! -z `docker ps -qaf 'name=registry_service$'` ]").and_return(true)
stub_command("[ ! -z `docker ps -qaf 'name=registry_proxy$'` ]").and_return(true)
stub_command('netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"').and_return(false)
end
context 'testing default action, default properties' do
it 'pulls docker_image[hello-world]' do
expect(chef_run).to pull_docker_image('hello-world').with(
api_retries: 3,
destination: nil,
force: false,
nocache: false,
noprune: false,
read_timeout: 120,
repo: 'hello-world',
rm: true,
source: nil,
tag: 'latest',
write_timeout: nil
)
end
end
context 'testing non-default name attribute containing a single quote' do
it "pulls docker_image[Tom's container]" do
expect(chef_run).to pull_docker_image("Tom's container").with(
repo: 'tduffield/testcontainerd'
)
end
end
context 'testing the :pull action' do
it 'pulls docker_image[busybox]' do
expect(chef_run).to pull_docker_image('busybox')
end
end
context 'testing using pull_if_missing' do
it 'pull_if_missing docker_image[debian]' do
expect(chef_run).to pull_if_missing_docker_image('debian')
end
end
context 'testing specifying a tag and read/write timeouts' do
it 'pulls docker_image[alpine]' do
expect(chef_run).to pull_docker_image('alpine').with(
tag: '3.1',
read_timeout: 60,
write_timeout: 60
)
end
end
context 'testing the host property' do
it 'pulls docker_image[alpine-localhost]' do
expect(chef_run).to pull_docker_image('alpine-localhost').with(
repo: 'alpine',
tag: '2.7',
host: 'tcp://127.0.0.1:2376'
)
end
end
context 'testing :remove action' do
it 'runs execute[pull vbatts/slackware]' do
expect(chef_run).to run_execute('pull vbatts/slackware').with(
command: 'docker pull vbatts/slackware ; touch /marker_image_slackware',
creates: '/marker_image_slackware'
)
end
it 'removes docker_image[vbatts/slackware]' do
expect(chef_run).to remove_docker_image('vbatts/slackware')
end
end
context 'testing :save action' do
it 'saves docker_image[save hello-world]' do
expect(chef_run).to save_docker_image('save hello-world').with(
repo: 'hello-world',
destination: '/hello-world.tar'
)
end
end
context 'testing :load action' do
it 'pulls docker_image[cirros]' do
expect(chef_run).to pull_docker_image('cirros')
end
it 'saves docker_image[save cirros]' do
expect(chef_run).to save_docker_image('save cirros').with(
destination: '/cirros.tar'
)
end
it 'removes docker_image[remove cirros]' do
expect(chef_run).to remove_docker_image('remove cirros').with(
repo: 'cirros'
)
end
it 'loads docker_image[load cirros]' do
expect(chef_run).to load_docker_image('load cirros').with(
source: '/cirros.tar'
)
end
it 'creates file[/marker_load_cirros-1]' do
expect(chef_run).to create_file('/marker_load_cirros-1')
end
end
context 'testing the :build action from Dockerfile' do
it 'creates directory[/usr/local/src/container1]' do
expect(chef_run).to create_directory('/usr/local/src/container1')
end
it 'creates cookbook_file[/usr/local/src/container1/Dockerfile]' do
expect(chef_run).to create_cookbook_file('/usr/local/src/container1/Dockerfile').with(
source: 'Dockerfile_1'
)
end
it 'build docker_image[someara/image-1]' do
expect(chef_run).to build_docker_image('someara/image-1').with(
tag: 'v0.1.0',
source: '/usr/local/src/container1/Dockerfile',
force: true
)
end
it 'creates file[/marker_image_image-1]' do
expect(chef_run).to create_file('/marker_image_image-1')
end
end
context 'testing the :build action from directory' do
it 'creates directory[/usr/local/src/container2]' do
expect(chef_run).to create_directory('/usr/local/src/container2')
end
it 'creates file[/usr/local/src/container2/foo.txt]' do
expect(chef_run).to create_file('/usr/local/src/container2/foo.txt').with(
content: 'Dockerfile_2 contains ADD for this file'
)
end
it 'creates cookbook_file[/usr/local/src/container2/Dockerfile]' do
expect(chef_run).to create_cookbook_file('/usr/local/src/container2/Dockerfile').with(
source: 'Dockerfile_2'
)
end
it 'build_if_missing docker_image[someara/image.2]' do
expect(chef_run).to build_if_missing_docker_image('someara/image.2').with(
tag: 'v0.1.0',
source: '/usr/local/src/container2'
)
end
end
context 'testing the :build action from a tarball' do
it 'creates cookbook_file[/usr/local/src/image_3.tar]' do
expect(chef_run).to create_cookbook_file('/usr/local/src/image_3.tar').with(
source: 'image_3.tar'
)
end
it 'build_if_missing docker_image[image_3]' do
expect(chef_run).to build_if_missing_docker_image('image_3').with(
tag: 'v0.1.0',
source: '/usr/local/src/image_3.tar'
)
end
end
context 'testing the :import action' do
it 'imports docker_image[hello-again]' do
expect(chef_run).to import_docker_image('hello-again').with(
tag: 'v0.1.0',
source: '/hello-world.tar'
)
end
end
context 'testing images with dots and dashes in the name' do
it 'pulls docker_image[someara/name-w-dashes]' do
expect(chef_run).to pull_docker_image('someara/name-w-dashes')
end
it 'pulls docker_image[someara/name.w.dots]' do
expect(chef_run).to pull_docker_image('someara/name.w.dots')
end
end
context 'when setting up a local registry' do
it 'includes the "docker_test::registry" recipe' do
expect(chef_run).to include_recipe('docker_test::registry')
end
end
context 'testing pushing to a private registry' do
it 'tags docker_tag[private repo tag for name-w-dashes:v1.0.1]' do
expect(chef_run).to tag_docker_tag('private repo tag for name-w-dashes:v1.0.1').with(
target_repo: 'hello-again',
target_tag: 'v0.1.0',
to_repo: 'localhost:5043/someara/name-w-dashes',
to_tag: 'latest'
)
end
it 'tags docker_tag[private repo tag for name.w.dots]' do
expect(chef_run).to tag_docker_tag('private repo tag for name.w.dots').with(
target_repo: 'busybox',
target_tag: 'latest',
to_repo: 'localhost:5043/someara/name.w.dots',
to_tag: 'latest'
)
end
it 'pushes docker_image[localhost:5043/someara/name-w-dashes]' do
expect(chef_run).to push_docker_image('localhost:5043/someara/name-w-dashes')
end
it 'creates file[/marker_image_private_name-w-dashes]' do
expect(chef_run).to create_file('/marker_image_private_name-w-dashes')
end
it 'pushes docker_image[localhost:5043/someara/name.w.dots]' do
expect(chef_run).to push_docker_image('localhost:5043/someara/name.w.dots')
end
it 'pushes docker_image[localhost:5043/someara/name.w.dots] with tag v0.1.0' do
expect(chef_run).to push_docker_image('localhost:5043/someara/name.w.dots').with(
tag: 'v0.1.0'
)
end
it 'login docker_registry[localhost:5043]' do
expect(chef_run).to login_docker_registry('localhost:5043').with(
username: 'testuser',
password: 'testpassword',
email: 'alice@computers.biz'
)
end
it 'creates file[/marker_image_private_name.w.dots]' do
expect(chef_run).to create_file('/marker_image_private_name.w.dots')
end
end
context 'testing pulling from public Dockerhub after being authenticated to a private one' do
it 'pulls docker_image[fedora]' do
expect(chef_run).to pull_docker_image('fedora')
end
end
end

View File

@ -1,140 +0,0 @@
require 'spec_helper'
describe 'docker_test::installation_package' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '18.04',
step_into: ['docker_installation_package']).converge(described_recipe)
end
context 'testing default action, default properties' do
it 'installs docker' do
expect(chef_run).to create_docker_installation_package('default').with(version: '18.06.0')
end
end
# Coverage of all recent docker versions
# To ensure test coverage and backwards compatibility
# With the frequent changes in package naming convention
# List generated from
# https://download.docker.com/linux/ubuntu/dists/#{distro}/stable/binary-amd64/Packages
context 'version strings for Ubuntu 18.04' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '18.04',
step_into: ['docker_installation_package']).converge(described_recipe)
end
[
# Bionic
{ docker_version: '18.03.1', expected: '18.03.1~ce~3-0~ubuntu' },
{ docker_version: '18.06.0', expected: '18.06.0~ce~3-0~ubuntu' },
{ docker_version: '18.06.1', expected: '18.06.1~ce~3-0~ubuntu' },
{ docker_version: '18.09.0', expected: '5:18.09.0~3-0~ubuntu-bionic' },
].each do |suite|
it 'generates the correct version string ubuntu bionic' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
context 'version strings for Ubuntu 16.04' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '16.04',
step_into: ['docker_installation_package']).converge(described_recipe)
end
[
{ docker_version: '17.03.0', expected: '17.03.0~ce-0~ubuntu-xenial' },
{ docker_version: '17.03.1', expected: '17.03.1~ce-0~ubuntu-xenial' },
{ docker_version: '17.03.2', expected: '17.03.2~ce-0~ubuntu-xenial' },
{ docker_version: '17.03.3', expected: '17.03.3~ce-0~ubuntu-xenial' },
{ docker_version: '17.06.0', expected: '17.06.0~ce-0~ubuntu' },
{ docker_version: '17.06.1', expected: '17.06.1~ce-0~ubuntu' },
{ docker_version: '17.09.0', expected: '17.09.0~ce-0~ubuntu' },
{ docker_version: '17.09.1', expected: '17.09.1~ce-0~ubuntu' },
{ docker_version: '17.12.0', expected: '17.12.0~ce-0~ubuntu' },
{ docker_version: '17.12.1', expected: '17.12.1~ce-0~ubuntu' },
{ docker_version: '18.03.0', expected: '18.03.0~ce-0~ubuntu' },
{ docker_version: '18.03.1', expected: '18.03.1~ce-0~ubuntu' },
{ docker_version: '18.06.0', expected: '18.06.0~ce~3-0~ubuntu' },
{ docker_version: '18.06.1', expected: '18.06.1~ce~3-0~ubuntu' },
{ docker_version: '18.09.0', expected: '5:18.09.0~3-0~ubuntu-xenial' },
].each do |suite|
it 'generates the correct version string ubuntu xenial' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
context 'version strings for Debian 9.5' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'debian',
version: '9.5',
step_into: ['docker_installation_package']).converge(described_recipe)
end
[
{ docker_version: '17.03.0', expected: '17.03.0~ce-0~debian-stretch' },
{ docker_version: '17.03.1', expected: '17.03.1~ce-0~debian-stretch' },
{ docker_version: '17.03.2', expected: '17.03.2~ce-0~debian-stretch' },
{ docker_version: '17.03.3', expected: '17.03.3~ce-0~debian-stretch' },
{ docker_version: '17.06.0', expected: '17.06.0~ce-0~debian' },
{ docker_version: '17.06.1', expected: '17.06.1~ce-0~debian' },
{ docker_version: '17.09.0', expected: '17.09.0~ce-0~debian' },
{ docker_version: '17.09.1', expected: '17.09.1~ce-0~debian' },
{ docker_version: '17.12.0', expected: '17.12.0~ce-0~debian' },
{ docker_version: '17.12.1', expected: '17.12.1~ce-0~debian' },
{ docker_version: '18.03.0', expected: '18.03.0~ce-0~debian' },
{ docker_version: '18.03.1', expected: '18.03.1~ce-0~debian' },
{ docker_version: '18.06.0', expected: '18.06.0~ce~3-0~debian' },
{ docker_version: '18.06.1', expected: '18.06.1~ce~3-0~debian' },
{ docker_version: '18.09.0', expected: '5:18.09.0~3-0~debian-stretch' },
].each do |suite|
it 'generates the correct version string debian stretch' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
context 'version strings for Centos 7' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'centos',
version: '7',
step_into: ['docker_installation_package']).converge(described_recipe)
end
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
[
{ docker_version: '17.03.0', expected: '17.03.0.ce-1.el7.centos' },
{ docker_version: '17.03.1', expected: '17.03.1.ce-1.el7.centos' },
{ docker_version: '17.03.2', expected: '17.03.2.ce-1.el7.centos' },
{ docker_version: '17.03.3', expected: '17.03.3.ce-1.el7' },
{ docker_version: '17.06.0', expected: '17.06.0.ce-1.el7.centos' },
{ docker_version: '17.06.1', expected: '17.06.1.ce-1.el7.centos' },
{ docker_version: '17.09.0', expected: '17.09.0.ce-1.el7.centos' },
{ docker_version: '17.09.1', expected: '17.09.1.ce-1.el7.centos' },
{ docker_version: '17.12.0', expected: '17.12.0.ce-1.el7.centos' },
{ docker_version: '17.12.1', expected: '17.12.1.ce-1.el7.centos' },
{ docker_version: '18.03.0', expected: '18.03.0.ce-1.el7.centos' },
{ docker_version: '18.03.1', expected: '18.03.1.ce-1.el7.centos' },
{ docker_version: '18.06.0', expected: '18.06.0.ce-3.el7' },
{ docker_version: '18.06.1', expected: '18.06.1.ce-3.el7' },
{ docker_version: '18.09.0', expected: '18.09.0-3.el7' },
].each do |suite|
it 'generates the correct version string centos 7' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
end

View File

@ -1,174 +0,0 @@
require 'spec_helper'
describe 'docker_test::network' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
context 'creates a network with unicode name' do
it 'creates docker_network_seseme_straße' do
expect(chef_run).to create_docker_network('seseme_straße')
end
end
context 'creates a network with defaults' do
it 'creates docker_network_a' do
expect(chef_run).to create_docker_network('network_a')
end
it 'creates echo-base-network_a' do
expect(chef_run).to run_docker_container('echo-base-network_a')
end
it 'creates echo-station-network_a' do
expect(chef_run).to run_docker_container('echo-station-network_a')
end
end
context 'when testing network deletion' do
it 'creates network_b with the CLI' do
expect(chef_run).to run_execute('create network_b').with(
command: 'docker network create network_b'
)
end
it 'creates /marker_delete_network_b' do
expect(chef_run).to create_file('/marker_delete_network_b')
end
it 'deletes docker_network[network_b]' do
expect(chef_run).to delete_docker_network('network_b')
end
end
context 'creates a network with subnet and gateway' do
it 'creates docker_network_c' do
expect(chef_run).to create_docker_network('network_c').with(
subnet: '192.168.88.0/24',
gateway: '192.168.88.1'
)
end
it 'creates echo-base-network_c' do
expect(chef_run).to run_docker_container('echo-base-network_c')
end
it 'creates echo-station-network_c' do
expect(chef_run).to run_docker_container('echo-station-network_c')
end
end
context 'creates a network with aux_address' do
it 'creates docker_network_d' do
expect(chef_run).to create_docker_network('network_d').with(
subnet: '192.168.89.0/24',
gateway: '192.168.89.1',
aux_address: ['a=192.168.89.2', 'b=192.168.89.3']
)
end
it 'creates echo-base-network_d' do
expect(chef_run).to run_docker_container('echo-base-network_d')
end
it 'creates echo-station-network_d' do
expect(chef_run).to run_docker_container('echo-station-network_d')
end
end
context 'creates a network with overlay driver' do
it 'creates network_e' do
expect(chef_run).to create_docker_network('network_e').with(
driver: 'overlay'
)
end
end
context 'creates a network with an ip-range' do
it 'creates docker_network_f' do
expect(chef_run).to create_docker_network('network_f').with(
driver: 'bridge',
subnet: '172.28.0.0/16',
gateway: '172.28.5.254',
ip_range: '172.28.5.0/24'
)
end
it 'creates echo-base-network_f' do
expect(chef_run).to run_docker_container('echo-base-network_f')
end
it 'creates echo-station-network_f' do
expect(chef_run).to run_docker_container('echo-station-network_f')
end
end
context 'create an overlay network with multiple subnets' do
it 'creates docker_network_g' do
expect(chef_run).to create_docker_network('network_g').with(
driver: 'overlay',
subnet: ['192.168.0.0/16', '192.170.0.0/16'],
gateway: ['192.168.0.100', '192.170.0.100'],
ip_range: '192.168.1.0/24',
aux_address: ['a=192.168.1.5', 'b=192.168.1.6', 'a=192.170.1.5', 'b=192.170.1.6']
)
end
it 'creates echo-base-network_g' do
expect(chef_run).to run_docker_container('echo-base-network_g')
end
it 'creates echo-station-network_g' do
expect(chef_run).to run_docker_container('echo-station-network_g')
end
end
context 'connect and disconnect a container' do
it 'creates docker_network_h1' do
expect(chef_run).to create_docker_network('network_h1')
end
it 'creates docker_network_h2' do
expect(chef_run).to create_docker_network('network_h2')
end
it 'creates container1-network_h' do
expect(chef_run).to run_docker_container('container1-network_h')
end
it 'creates /marker_network_h' do
expect(chef_run).to create_file('/marker_network_h')
end
it 'connects container1-network_h with network_h2' do
expect(chef_run).to connect_docker_network('network_h2 connector').with(
container: 'container1-network_h'
)
end
it 'disconnects container1-network_h from network_h1' do
expect(chef_run).to disconnect_docker_network('network_h1 disconnector').with(
container: 'container1-network_h'
)
end
end
context 'ipv6 network' do
it 'creates docker_network_ipv6' do
expect(chef_run).to create_docker_network('network_ipv6').with(
enable_ipv6: true,
subnet: 'fd00:dead:beef::/48'
)
end
it 'creates docker_network_ipv4' do
expect(chef_run).to create_docker_network('network_ipv4')
end
end
context 'internal network' do
it 'creates docker_network_internal' do
expect(chef_run).to create_docker_network('network_internal').with(
internal: true
)
end
end
end

View File

@ -1,118 +0,0 @@
require 'spec_helper'
describe 'docker_test::plugin' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
let(:sshfs_caps) do
[
{
'Name' => 'network',
'Value' => ['host'],
},
{
'Name' => 'mount',
'Value' => ['/var/lib/docker/plugins/'],
},
{
'Name' => 'mount',
'Value' => [''],
},
{
'Name' => 'device',
'Value' => ['/dev/fuse'],
},
{
'Name' => 'capabilities',
'Value' => ['CAP_SYS_ADMIN'],
},
]
end
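# The privilege set the vieux/sshfs plugin requests at install time; granting it up front lets :install run unattended.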
context 'testing default action, default properties, but with privilege grant' do
it 'installs vieux/sshfs' do
expect(chef_run).to install_docker_plugin('vieux/sshfs').with(
api_retries: 3,
grant_privileges: sshfs_caps,
options: {},
remote_tag: 'latest'
)
end
end
context 'reconfigure existing plugin' do
it 'enables debug on vieux/sshfs' do
expect(chef_run).to update_docker_plugin('configure vieux/sshfs').with(
api_retries: 3,
grant_privileges: [],
options: {
'DEBUG' => '1',
},
local_alias: 'vieux/sshfs',
remote_tag: 'latest'
)
end
end
context 'testing the remove action' do
it 'removes vieux/sshfs' do
expect(chef_run).to remove_docker_plugin('remove vieux/sshfs').with(
api_retries: 3,
grant_privileges: [],
options: {},
local_alias: 'vieux/sshfs',
remote_tag: 'latest'
)
end
end
context 'testing configure and install at the same time' do
it 'installs wetopi/rbd' do
expect(chef_run).to install_docker_plugin('rbd').with(
remote: 'wetopi/rbd',
remote_tag: '1.0.1',
grant_privileges: true,
options: {
'LOG_LEVEL' => '4',
}
)
end
it 'removes wetopi/rbd again' do
expect(chef_run).to remove_docker_plugin('remove rbd').with(
local_alias: 'rbd'
)
end
end
context 'install is idempotent' do
it 'installs vieux/sshfs two times' do
expect(chef_run).to install_docker_plugin('sshfs 2.1').with(
remote: 'vieux/sshfs',
remote_tag: 'latest',
local_alias: 'sshfs',
grant_privileges: true
)
expect(chef_run).to install_docker_plugin('sshfs 2.2').with(
remote: 'vieux/sshfs',
remote_tag: 'latest',
local_alias: 'sshfs',
grant_privileges: true
)
end
end
context 'test :enable / :disable action' do
it 'enables sshfs' do
expect(chef_run).to enable_docker_plugin('enable sshfs').with(
local_alias: 'sshfs'
)
end
it 'disables sshfs' do
expect(chef_run).to disable_docker_plugin('disable sshfs').with(
local_alias: 'sshfs'
)
end
end
end

View File

@ -1,125 +0,0 @@
require 'spec_helper'
describe 'docker_test::registry' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
before do
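# Stub every shell guard the registry recipe relies on so the ChefSpec converge never shells out.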
stub_command('/usr/bin/test -f /tmp/registry/tls/ca.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/ca-key.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/key.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/cert.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/server-key.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/client.csr').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.csr').and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=registry_service$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=registry_proxy$'` ]").and_return(false)
stub_command('netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"').and_return(false)
end
context 'when compiling the recipe' do
it 'creates directory[/tmp/registry/tls]' do
expect(chef_run).to create_directory('/tmp/registry/tls').with(
recursive: true
)
end
it 'runs bash[creating private key for docker server]' do
expect(chef_run).to run_bash('creating private key for docker server')
end
it 'runs bash[generating CA private and public key]' do
expect(chef_run).to run_bash('generating CA private and public key')
end
it 'runs bash[generating certificate request for server]' do
expect(chef_run).to run_bash('generating certificate request for server')
end
it 'creates file[/tmp/registry/tls/server-extfile.cnf]' do
expect(chef_run).to create_file('/tmp/registry/tls/server-extfile.cnf')
end
it 'runs bash[signing request for server]' do
expect(chef_run).to run_bash('signing request for server')
end
it 'runs bash[creating private key for docker client]' do
expect(chef_run).to run_bash('creating private key for docker client')
end
it 'runs bash[generating certificate request for client]' do
expect(chef_run).to run_bash('generating certificate request for client')
end
it 'creates file[/tmp/registry/tls/client-extfile.cnf]' do
expect(chef_run).to create_file('/tmp/registry/tls/client-extfile.cnf')
end
it 'runs bash[signing request for client]' do
expect(chef_run).to run_bash('signing request for client')
end
it 'pulls docker_image[nginx]' do
expect(chef_run).to pull_docker_image('nginx').with(
tag: '1.9'
)
end
it 'pulls docker_image[registry]' do
expect(chef_run).to pull_docker_image('registry').with(
tag: '2.6.1'
)
end
it 'creates directory[/tmp/registry/auth]' do
expect(chef_run).to create_directory('/tmp/registry/auth').with(
recursive: true,
owner: 'root',
mode: '0755'
)
end
it 'creates template[/tmp/registry/auth/registry.conf]' do
expect(chef_run).to create_template('/tmp/registry/auth/registry.conf').with(
source: 'registry/auth/registry.conf.erb',
owner: 'root',
mode: '0755'
)
end
it 'runs execute[copy server cert for registry]' do
expect(chef_run).to run_execute('copy server cert for registry').with(
command: 'cp /tmp/registry/tls/server.pem /tmp/registry/auth/server.crt',
creates: '/tmp/registry/auth/server.crt'
)
end
it 'runs execute[copy server key for registry]' do
expect(chef_run).to run_execute('copy server key for registry').with(
command: 'cp /tmp/registry/tls/server-key.pem /tmp/registry/auth/server.key',
creates: '/tmp/registry/auth/server.key'
)
end
it 'creates template[/tmp/registry/auth/registry.password]' do
expect(chef_run).to create_template('/tmp/registry/auth/registry.password').with(
source: 'registry/auth/registry.password.erb',
owner: 'root',
mode: '0755'
)
end
it 'runs bash[start docker registry]' do
expect(chef_run).to run_bash('start docker registry')
end
it 'runs bash[start docker registry proxy]' do
expect(chef_run).to run_bash('start docker registry proxy')
end
it 'runs bash[wait for docker registry and proxy]' do
expect(chef_run).to run_bash('wait for docker registry and proxy')
end
end
end

View File

@ -1,55 +0,0 @@
require 'spec_helper'
require_relative '../../libraries/helpers_service'
describe 'docker_test::service' do
before do
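# Pin the detected engine version so the rendered systemd unit below is deterministic.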
allow_any_instance_of(DockerCookbook::DockerHelpers::Service).to receive(:installed_docker_version).and_return('18.06.0')
end
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '16.04',
step_into: %w(helpers_service docker_service docker_service_base docker_service_manager docker_service_manager_systemd)).converge(described_recipe)
end
# If you have to change this file, you most likely updated a default service option.
# Note that this will require a docker service restart,
# which is consumer-impacting.
expected = <<EOH
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service
Requires=docker.socket
Wants=network-online.target
[Service]
Type=notify
ExecStartPre=/sbin/sysctl -w net.ipv4.ip_forward=1
ExecStartPre=/sbin/sysctl -w net.ipv6.conf.all.forwarding=1
ExecStart=/usr/bin/dockerd --bip=10.10.10.0/24 --group=docker --default-address-pool=base=10.10.10.0/16,size=24 --pidfile=/var/run/docker.pid --storage-driver=overlay2
ExecStartPost=/usr/lib/docker/docker-wait-ready
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOH
it 'creates docker_service[default]' do
expect(chef_run).to render_file('/etc/systemd/system/docker.service').with_content { |content|
# For tests which run on windows - convert CRLF
expect(content.gsub(/[\r\n]+/m, "\n")).to match(expected.gsub(/[\r\n]+/m, "\n"))
}
end
end

View File

@ -1,47 +0,0 @@
require 'spec_helper'
describe 'docker_test::volume' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
it 'pull_if_missing docker_image[alpine]' do
expect(chef_run).to pull_if_missing_docker_image('alpine').with(
tag: '3.1'
)
end
context 'testing remove action' do
it 'executes docker volume create --name remove_me' do
expect(chef_run).to run_execute('docker volume create --name remove_me')
end
it 'creates file /marker_remove_me' do
expect(chef_run).to create_file('/marker_remove_me')
end
it 'removes docker_volume[remove_me]' do
expect(chef_run).to remove_docker_volume('remove_me')
end
end
context 'testing create action' do
it 'creates volume hello' do
expect(chef_run).to create_docker_volume('hello')
end
it 'creates volume hello again' do
expect(chef_run).to create_docker_volume('hello again').with(
volume_name: 'hello_again'
)
end
context 'testing the containers that use the volume' do
it 'runs file_writer' do
expect(chef_run).to run_if_missing_docker_container('file_writer')
end
it 'runs file_reader' do
expect(chef_run).to run_if_missing_docker_container('file_reader')
end
end
end
end

View File

@ -1,82 +0,0 @@
# require 'rspec'
# require 'rspec/its'
# require_relative '../libraries/helpers_container'
#
# class FakeContainerForTestingImageProperty
# include DockerCookbook::DockerHelpers::Container
#
# def initialize(attributes = {})
# @attributes = attributes
# end
#
# def repo(value = nil)
# attributes['repo'] = value if value
# attributes['repo']
# end
#
# def tag(value = nil)
# attributes['tag'] = value if value
# attributes['tag'] || 'latest'
# end
#
# private
#
# attr_reader :attributes
# end
#
# describe DockerCookbook::DockerHelpers::Container do
# let(:helper) { FakeContainerForTestingImageProperty.new }
#
# describe '#image' do
# subject { helper }
#
# context "If you say: repo 'blah'" do
# before { helper.repo 'blah' }
# its(:image) { is_expected.to eq('blah:latest') }
# end
#
# context "If you say: repo 'blah'; tag '3.1'" do
# before do
# helper.repo 'blah'
# helper.tag '3.1'
# end
# its(:image) { is_expected.to eq('blah:3.1') }
# end
#
# context "If you say: image 'blah'" do
# before { helper.image 'blah' }
# its(:repo) { is_expected.to eq('blah') }
# its(:tag) { is_expected.to eq('latest') }
# end
#
# context "If you say: image 'blah:3.1'" do
# before { helper.image 'blah:3.1' }
# its(:repo) { is_expected.to eq('blah') }
# its(:tag) { is_expected.to eq('3.1') }
# end
#
# context "If you say: image 'repo/blah'" do
# before { helper.image 'repo/blah' }
# its(:repo) { is_expected.to eq('repo/blah') }
# its(:tag) { is_expected.to eq('latest') }
# end
#
# context "If you say: image 'repo/blah:3.1'" do
# before { helper.image 'repo/blah:3.1' }
# its(:repo) { is_expected.to eq('repo/blah') }
# its(:tag) { is_expected.to eq('3.1') }
# end
#
# context "If you say: image 'repo:1337/blah'" do
# before { helper.image 'repo:1337/blah' }
# its(:repo) { is_expected.to eq('repo:1337/blah') }
# its(:tag) { is_expected.to eq('latest') }
# end
#
# context "If you say: image 'repo:1337/blah:3.1'" do
# before { helper.image 'repo:1337/blah:3.1' }
# its(:repo) { is_expected.to eq('repo:1337/blah') }
# its(:tag) { is_expected.to eq('3.1') }
# end
# end
# end

View File

@ -1,49 +0,0 @@
# require 'rspec'
# require_relative '../libraries/helpers_network'
#
# describe Class.new { include DockerCookbook::DockerHelpers::Network } do
# subject(:helper) { Class.new { include DockerCookbook::DockerHelpers::Network } }
# let(:subnets) do
# %w(
# 192.168.0.0/24
# )
# end
#
# let(:ip_ranges) do
# %w(
# 192.168.0.31/28
# )
# end
#
# let(:gateways) do
# %w(
# 192.168.0.34
# )
# end
#
# let(:aux_addresses) do
# %w(
# foo=192.168.0.34
# bar=192.168.0.124
# )
# end
#
# describe '#consolidate_ipam' do
# subject { described_class.new.consolidate_ipam(subnets, ip_ranges, gateways, aux_addresses) }
# it 'should have a subnet' do
# expect(subject).to include(include('Subnet' => '192.168.0.0/24'))
# end
#
# it 'should have aux address' do
# expect(subject).to include(include('AuxiliaryAddresses' => { 'foo' => '192.168.0.34', 'bar' => '192.168.0.124' }))
# end
#
# it 'should have gateways' do
# expect(subject).to include(include('Gateway' => '192.168.0.34'))
# end
#
# it 'should have ip range' do
# expect(subject).to include(include('IPRange' => '192.168.0.31/28'))
# end
# end
# end

View File

@ -1,55 +0,0 @@
require 'spec_helper'
require 'docker'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_container'
describe DockerCookbook::DockerContainer do
let(:resource) { DockerCookbook::DockerContainer.new('hello_world') }
it 'has a default action of [:run]' do
expect(resource.action).to eql([:run])
end
describe 'gets ip_address_from_container_networks' do
let(:options) { { 'id' => rand(10_000).to_s } }
subject do
Docker::Container.send(:new, Docker.connection, options)
end
# https://docs.docker.com/engine/api/version-history/#v121-api-changes
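# Engines older than API 1.21 reported the container IP at the top level ('IPAddress'); the helper reads NetworkSettings/Networks, so the first context expects nil.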
context 'when docker API < 1.21' do
let(:ip_address) { '10.0.0.1' }
let(:options) do
{
'id' => rand(10_000).to_s,
'IPAddress' => ip_address,
}
end
it 'gets ip_address as nil' do
actual = resource.ip_address_from_container_networks(subject)
expect { resource.ip_address_from_container_networks(subject) }.not_to raise_error
expect(actual).to eq(nil)
end
end
context 'when docker API > 1.21' do
let(:ip_address) { '10.0.0.1' }
let(:options) do
{
'id' => rand(10_000).to_s,
'NetworkSettings' => {
'Networks' => {
'bridge' => {
'IPAMConfig' => {
'IPv4Address' => ip_address,
},
},
},
},
}
end
it 'gets ip_address' do
actual = resource.ip_address_from_container_networks(subject)
expect(actual).to eq(ip_address)
end
end
end
end

View File

@ -1,126 +0,0 @@
require 'spec_helper'
require 'chef'
require 'excon'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_container'
describe 'docker_container' do
step_into :docker_container
platform 'ubuntu'
# Info returned by docker api
# https://docs.docker.com/engine/api/v1.39/#tag/Container
let(:container) do
{
'Id' => '123456789',
'IPAddress' => '10.0.0.1',
'Image' => 'ubuntu:bionic',
'Names' => ['/hello_world'],
'Config' => { 'Labels' => {} },
'HostConfig' => { 'RestartPolicy' => { 'Name' => 'unless-stopped',
'MaximumRetryCount' => 1 },
'Binds' => [],
'ReadonlyRootfs' => false },
'State' => 'not running',
'Warnings' => [],
}.to_json
end
# https://docs.docker.com/engine/api/v1.39/#tag/Image
let(:image) do
{ 'Id' => 'bf119e2',
'Repository' => 'ubuntu', 'Tag' => 'bionic',
'Created' => 1_364_102_658, 'Size' => 24_653,
'VirtualSize' => 180_116_135,
'Config' => { 'Labels' => {} } }.to_json
end
# https://docs.docker.com/engine/api/v1.39/#operation/SystemInfo
let(:info) do
{ 'Labels' => {} }.to_json
end
# https://docs.docker.com/engine/api/v1.39/#operation/ContainerCreate
let(:create) do
{
'Id' => 'e90e34656806',
'Warnings' => [],
}.to_json
end
before do
# Ensure docker api calls are mocked
# Mocking at this low level is much easier to do in Excon.
# Plus, the low-level mock allows testing this cookbook
# against multiple docker APIs and docker-api gems.
# https://github.com/excon/excon#stubs
Excon.defaults[:mock] = true
Excon.stub({ method: :get, path: '/v1.16/containers/hello_world/json' }, body: container, status: 200)
Excon.stub({ method: :get, path: '/v1.16/images/ubuntu:bionic/json' }, body: image, status: 200)
Excon.stub({ method: :get, path: '/v1.16/info' }, body: info, status: 200)
Excon.stub({ method: :delete, path: '/v1.16/containers/123456789' }, body: '', status: 200)
Excon.stub({ method: :post, path: '/v1.16/containers/create' }, body: create, status: 200)
Excon.stub({ method: :get, path: '/v1.16/containers/123456789/start' }, body: '', status: 200)
end
context 'creates a docker container with default options' do
recipe do
docker_container 'hello_world' do
tag 'ubuntu:latest'
action :create
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to create_docker_container('hello_world').with(
tag: 'ubuntu:latest',
create_options: { 'name' => 'hello_world', 'Image' => 'hello_world:ubuntu:latest', 'Labels' => {}, 'Cmd' => nil, 'AttachStderr' => false, 'AttachStdin' => false, 'AttachStdout' => false, 'Domainname' => '', 'Entrypoint' => nil, 'Env' => [], 'ExposedPorts' => {}, 'Hostname' => nil, 'MacAddress' => nil, 'NetworkDisabled' => false, 'OpenStdin' => false, 'StdinOnce' => false, 'Tty' => false, 'User' => '', 'Volumes' => {}, 'WorkingDir' => '', 'HostConfig' => { 'Binds' => nil, 'CapAdd' => nil, 'CapDrop' => nil, 'CgroupParent' => '', 'CpuShares' => 0, 'CpusetCpus' => '', 'Devices' => [], 'Dns' => [], 'DnsSearch' => [], 'ExtraHosts' => nil, 'IpcMode' => '', 'Init' => nil, 'KernelMemory' => 0, 'Links' => nil, 'LogConfig' => nil, 'Memory' => 0, 'MemorySwap' => 0, 'MemorySwappiness' => 0, 'MemoryReservation' => 0, 'NetworkMode' => 'bridge', 'OomKillDisable' => false, 'OomScoreAdj' => -500, 'Privileged' => false, 'PidMode' => '', 'PortBindings' => {}, 'PublishAllPorts' => false, 'RestartPolicy' => { 'Name' => nil, 'MaximumRetryCount' => 0 }, 'ReadonlyRootfs' => false, 'Runtime' => 'runc', 'SecurityOpt' => nil, 'Sysctls' => {}, 'Ulimits' => nil, 'UsernsMode' => '', 'UTSMode' => '', 'VolumesFrom' => nil, 'VolumeDriver' => nil }, 'NetworkingConfig' => { 'EndpointsConfig' => { 'bridge' => { 'IPAMConfig' => { 'IPv4Address' => nil }, 'Aliases' => [] } } } }
)
}
end
context 'creates a docker container with healthcheck options' do
recipe do
docker_container 'hello_world' do
tag 'ubuntu:latest'
health_check(
'Test' =>
[
'string',
],
'Interval' => 0,
'Timeout' => 0,
'Retries' => 0,
'StartPeriod' => 0
)
action :create
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to create_docker_container('hello_world').with(
tag: 'ubuntu:latest',
create_options: { 'name' => 'hello_world', 'Image' => 'hello_world:ubuntu:latest', 'Labels' => {}, 'Cmd' => nil, 'AttachStderr' => false, 'AttachStdin' => false, 'AttachStdout' => false, 'Domainname' => '', 'Entrypoint' => nil, 'Env' => [], 'ExposedPorts' => {}, 'Hostname' => nil, 'MacAddress' => nil, 'NetworkDisabled' => false, 'OpenStdin' => false, 'StdinOnce' => false, 'Tty' => false, 'User' => '', 'Volumes' => {}, 'WorkingDir' => '', 'HostConfig' => { 'Binds' => nil, 'CapAdd' => nil, 'CapDrop' => nil, 'CgroupParent' => '', 'CpuShares' => 0, 'CpusetCpus' => '', 'Devices' => [], 'Dns' => [], 'DnsSearch' => [], 'ExtraHosts' => nil, 'IpcMode' => '', 'Init' => nil, 'KernelMemory' => 0, 'Links' => nil, 'LogConfig' => nil, 'Memory' => 0, 'MemorySwap' => 0, 'MemorySwappiness' => 0, 'MemoryReservation' => 0, 'NetworkMode' => 'bridge', 'OomKillDisable' => false, 'OomScoreAdj' => -500, 'Privileged' => false, 'PidMode' => '', 'PortBindings' => {}, 'PublishAllPorts' => false, 'RestartPolicy' => { 'Name' => nil, 'MaximumRetryCount' => 0 }, 'ReadonlyRootfs' => false, 'Runtime' => 'runc', 'SecurityOpt' => nil, 'Sysctls' => {}, 'Ulimits' => nil, 'UsernsMode' => '', 'UTSMode' => '', 'VolumesFrom' => nil, 'VolumeDriver' => nil }, 'NetworkingConfig' => { 'EndpointsConfig' => { 'bridge' => { 'IPAMConfig' => { 'IPv4Address' => nil }, 'Aliases' => [] } } }, 'Healthcheck' => { 'Test' => ['string'], 'Interval' => 0, 'Timeout' => 0, 'Retries' => 0, 'StartPeriod' => 0 } }
)
}
end
context 'creates a docker container with default options for windows' do
platform 'windows'
recipe do
docker_container 'hello_world' do
tag 'ubuntu:latest'
action :create
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to create_docker_container('hello_world').with(
tag: 'ubuntu:latest',
# Should be missing 'MemorySwappiness'
create_options: { 'name' => 'hello_world', 'Image' => 'hello_world:ubuntu:latest', 'Labels' => {}, 'Cmd' => nil, 'AttachStderr' => false, 'AttachStdin' => false, 'AttachStdout' => false, 'Domainname' => '', 'Entrypoint' => nil, 'Env' => [], 'ExposedPorts' => {}, 'Hostname' => nil, 'MacAddress' => nil, 'NetworkDisabled' => false, 'OpenStdin' => false, 'StdinOnce' => false, 'Tty' => false, 'User' => '', 'Volumes' => {}, 'WorkingDir' => '', 'HostConfig' => { 'Binds' => nil, 'CapAdd' => nil, 'CapDrop' => nil, 'CgroupParent' => '', 'CpuShares' => 0, 'CpusetCpus' => '', 'Devices' => [], 'Dns' => [], 'DnsSearch' => [], 'ExtraHosts' => nil, 'IpcMode' => '', 'Init' => nil, 'KernelMemory' => 0, 'Links' => nil, 'LogConfig' => nil, 'Memory' => 0, 'MemorySwap' => 0, 'MemoryReservation' => 0, 'NetworkMode' => 'bridge', 'OomKillDisable' => false, 'OomScoreAdj' => -500, 'Privileged' => false, 'PidMode' => '', 'PortBindings' => {}, 'PublishAllPorts' => false, 'RestartPolicy' => { 'Name' => nil, 'MaximumRetryCount' => 0 }, 'ReadonlyRootfs' => false, 'Runtime' => 'runc', 'SecurityOpt' => nil, 'Sysctls' => {}, 'Ulimits' => nil, 'UsernsMode' => '', 'UTSMode' => '', 'VolumesFrom' => nil, 'VolumeDriver' => nil }, 'NetworkingConfig' => { 'EndpointsConfig' => { 'bridge' => { 'IPAMConfig' => { 'IPv4Address' => nil }, 'Aliases' => [] } } } }
)
}
end
end

View File

@ -1,27 +0,0 @@
require 'spec_helper'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_image_prune'
describe DockerCookbook::DockerImagePrune do
let(:resource) { DockerCookbook::DockerImagePrune.new('rspec') }
it 'has a default action of [:prune]' do
expect(resource.action).to eql([:prune])
end
it 'generates filter json' do
# Arrange
expected = '{"filters":["dangling=true","until=1h30m","label=com.example.vendor=ACME","label!=no_prune"]}'
resource.dangling = true
resource.prune_until = '1h30m'
resource.with_label = 'com.example.vendor=ACME'
resource.without_label = 'no_prune'
resource.action :prune
# Act
actual = resource.generate_json(resource)
# Assert
expect(actual).to eq(expected)
end
end

View File

@ -1,88 +0,0 @@
require 'spec_helper'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_registry'
describe 'docker_registry' do
step_into :docker_registry
platform 'ubuntu'
# Info returned by docker api
# https://docs.docker.com/engine/api/v1.39/#section/Authentication
let(:auth) do
{
'identitytoken' => '9cbafc023786cd7...',
}.to_json
end
before do
# Ensure docker api calls are mocked
# Mocking at this low level is much easier to do in Excon.
# Plus, the low-level mock allows testing this cookbook
# against multiple docker APIs and docker-api gems.
# https://github.com/excon/excon#stubs
Excon.defaults[:mock] = true
Excon.stub({ method: :post, path: '/v1.16/auth' }, body: auth, status: 200)
end
context 'logs into a docker registry with default options' do
recipe do
docker_registry 'chefspec_custom_registry' do
email 'chefspec_email'
password 'chefspec_password'
username 'chefspec_username'
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to login_docker_registry('chefspec_custom_registry').with(
email: 'chefspec_email',
password: 'chefspec_password',
username: 'chefspec_username',
host: nil
)
}
end
context 'logs into a docker registry with host' do
recipe do
docker_registry 'chefspec_custom_registry' do
email 'chefspec_email'
password 'chefspec_password'
username 'chefspec_username'
host 'chefspec_host'
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to login_docker_registry('chefspec_custom_registry').with(
email: 'chefspec_email',
password: 'chefspec_password',
username: 'chefspec_username',
host: 'chefspec_host'
)
}
end
context 'logs into a docker registry with host environment variable' do
recipe do
docker_registry 'chefspec_custom_registry' do
email 'chefspec_email'
password 'chefspec_password'
username 'chefspec_username'
end
end
it {
# Set the environment variable
stub_const 'ENV', ENV.to_h.merge('DOCKER_HOST' => 'chefspec_host_environment_variable')
expect { chef_run }.to_not raise_error
expect(chef_run).to login_docker_registry('chefspec_custom_registry').with(
email: 'chefspec_email',
password: 'chefspec_password',
username: 'chefspec_username',
host: 'chefspec_host_environment_variable'
)
}
end
end

View File

@ -1,21 +0,0 @@
require 'chefspec'
require 'chefspec/berkshelf'
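# Holds a handle to the currently running RSpec example; it is reset and reassigned before each example below.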
class RSpecHelper
class << self
attr_accessor :current_example
end
def self.reset!
@current_example = nil
end
end
RSpec.configure do |config|
config.filter_run focus: true
config.run_all_when_everything_filtered = true
config.before :each do
RSpecHelper.reset!
RSpecHelper.current_example = self
end
end

View File

@ -1,35 +0,0 @@
# CHANGELOG for docker_test
This file is used to list changes made in each version of docker_test.
## 0.5.1:
* Bugfix: Test docker_image :build for both file and directory source
## 0.5.0:
* Bugfix: Switch docker@0.25.0 deprecated dockerfile container LWRP attribute to source
## 0.4.0:
* Bugfix: Remove deprecated public_port in container_lwrp
* Bugfix: Add `init_type false` for busybox test containers
* Enhancement: Add tduffield/testcontainerd image, container, and tests
## 0.3.0:
* Enhancement: Change Dockerfile FROM to already downloaded busybox image instead of ubuntu
## 0.2.0:
* Added container_lwrp recipe
* Removed default recipe from image_lwrp recipe
## 0.1.0:
* Initial release of docker_test
- - -
Check the [Markdown Syntax Guide](http://daringfireball.net/projects/markdown/syntax) for help with Markdown.
The [Github Flavored Markdown page](http://github.github.com/github-flavored-markdown/) describes the differences between markdown on github and standard markdown.

View File

@ -1,2 +0,0 @@
FROM busybox
RUN /bin/echo 'hello from image_1'

View File

@ -1,4 +0,0 @@
FROM busybox
ADD foo.txt /tmp/foo.txt
RUN /bin/echo 'hello from image_2'
VOLUME /home

View File

@ -1,32 +0,0 @@
# Create a docker image that takes a long time to build
# CentOS as the base image. Any image would work for the for-loop test,
# but CentOS is needed for the yum test.
# Note that pulling the base image will not trigger a timeout,
# regardless of how long it takes.
FROM centos
# Simply wait for 30 minutes, outputting a status update every 10 seconds
# This does not appear to trigger the timeout problem
# RUN [ "bash", "-c", "for minute in {1..30} ; do for second in {0..59..10} ; do echo -n \" $minute:$second \" ; sleep 10 ; done ; done" ]
# This triggers the timeout.
# Sleep for 5 minutes, 3 times.
# RUN [ "bash", "-c", "for minute in {0..10..5} ; do echo -n \" $minute \" ; sleep 300 ; done" ]
# Let's try this next.
# Sleep for 1 minute, 16 times ({0..15})
RUN [ "bash", "-c", "for minute in {0..15} ; do echo -n \" $minute \" ; sleep 60 ; done" ]
# This should trigger the timeout unless you have a very fast Internet connection.
# RUN \
# curl -SL https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -o epel.rpm \
# && yum install -y epel.rpm \
# && rm epel.rpm \
# && yum install -y \
# zarafa \
# supervisor \
# && yum clean all \
# && rm -rf /usr/share/man /etc/httpd/conf.d/ssl.conf

View File

@ -1,2 +0,0 @@
FROM alpine:3.1
RUN /bin/echo 'hello from image_3'

View File

@ -1,9 +0,0 @@
name 'docker_test'
maintainer 'Sean OMeara'
maintainer_email 'sean@sean.io'
license 'Apache-2.0'
description 'installs a buncha junk'
version '0.6.0'
depends 'docker'
depends 'etcd'

View File

@ -1,21 +0,0 @@
################
# Docker service
################
docker_service 'default' do
host 'unix:///var/run/docker.sock'
install_method 'auto'
service_manager 'auto'
action [:create, :start]
end
docker_image 'alpine' do
action :pull
end
docker_container 'an_echo_server' do
repo 'alpine'
command 'nc -ll -p 7 -e /bin/cat'
port '7:7'
action :run
end

View File

@ -1,145 +0,0 @@
################
# Setting up TLS
################
caname = 'docker_service_default'
caroot = "/ca/#{caname}"
directory caroot.to_s do
recursive true
action :create
end
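# What follows: a self-signed CA, a server cert with the node's IPs as subjectAltNames, and a client cert marked for clientAuth.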
# Self signed CA
bash "#{caname} - generating CA private and public key" do
cmd = 'openssl req'
cmd += ' -x509'
cmd += ' -nodes'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -subj '/CN=kitchen2docker/'"
cmd += ' -newkey rsa:4096'
cmd += " -keyout #{caroot}/ca-key.pem"
cmd += " -out #{caroot}/ca.pem"
cmd += ' 2>&1>/dev/null'
code cmd
not_if "/usr/bin/test -f #{caroot}/ca-key.pem"
not_if "/usr/bin/test -f #{caroot}/ca.pem"
action :run
end
# server certs
bash "#{caname} - creating private key for docker server" do
code "openssl genrsa -out #{caroot}/server-key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/server-key.pem"
action :run
end
bash "#{caname} - generating certificate request for server" do
cmd = 'openssl req'
cmd += ' -new'
cmd += ' -sha256'
cmd += " -subj '/CN=#{node['hostname']}/'"
cmd += " -key #{caroot}/server-key.pem"
cmd += " -out #{caroot}/server.csr"
code cmd
only_if "/usr/bin/test -f #{caroot}/server-key.pem"
not_if "/usr/bin/test -f #{caroot}/server.csr"
action :run
end
file "#{caroot}/server-extfile.cnf" do
content "subjectAltName = IP:#{node['ipaddress']},IP:127.0.0.1\n"
action :create
end
bash "#{caname} - signing request for server" do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/server.csr"
cmd += " -out #{caroot}/server.pem"
cmd += " -extfile #{caroot}/server-extfile.cnf"
not_if "/usr/bin/test -f #{caroot}/server.pem"
code cmd
action :run
end
# client certs
bash "#{caname} - creating private key for docker client" do
code "openssl genrsa -out #{caroot}/key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/key.pem"
action :run
end
bash "#{caname} - generating certificate request for client" do
cmd = 'openssl req'
cmd += ' -new'
cmd += " -subj '/CN=client/'"
cmd += " -key #{caroot}/key.pem"
cmd += " -out #{caroot}/client.csr"
code cmd
only_if "/usr/bin/test -f #{caroot}/key.pem"
not_if "/usr/bin/test -f #{caroot}/client.csr"
action :run
end
file "#{caroot}/client-extfile.cnf" do
content "extendedKeyUsage = clientAuth\n"
action :create
end
bash "#{caname} - signing request for client" do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/client.csr"
cmd += " -out #{caroot}/cert.pem"
cmd += " -extfile #{caroot}/client-extfile.cnf"
code cmd
not_if "/usr/bin/test -f #{caroot}/cert.pem"
action :run
end
################
# Etcd service
################
etcd_service 'etcd0' do
advertise_client_urls "http://#{node['ipaddress']}:2379,http://0.0.0.0:4001"
listen_client_urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
initial_advertise_peer_urls "http://#{node['ipaddress']}:2380"
listen_peer_urls 'http://0.0.0.0:2380'
initial_cluster_token 'etcd0'
initial_cluster "etcd0=http://#{node['ipaddress']}:2380"
initial_cluster_state 'new'
action [:create, :start]
end
################
# Docker service
################
docker_service 'default' do
host ['unix:///var/run/docker.sock', 'tcp://127.0.0.1:2376']
version node['docker']['version']
labels ['environment:test', 'foo:bar']
tls_verify true
tls_ca_cert "#{caroot}/ca.pem"
tls_server_cert "#{caroot}/server.pem"
tls_server_key "#{caroot}/server-key.pem"
tls_client_cert "#{caroot}/cert.pem"
tls_client_key "#{caroot}/key.pem"
cluster_store "etcd://#{node['ipaddress']}:4001"
cluster_advertise "#{node['ipaddress']}:4001"
install_method 'package'
action [:create, :start]
end

View File

@ -1,25 +0,0 @@
docker_image 'busybox' do
action :pull_if_missing
end
docker_container 'busybox_exec' do
repo 'busybox'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
end
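# Each docker_exec below is guarded by a marker file on the host so it runs only once.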
docker_exec 'touch_it' do
container 'busybox_exec'
command ['touch', '/tmp/onefile']
timeout 120
not_if { ::File.exist?('/marker_busybox_exec_onefile') }
end
file '/marker_busybox_exec_onefile'
docker_exec 'poke_it' do
container 'busybox_exec'
command ['touch', '/tmp/twofile']
not_if { ::File.exist?('/marker_busybox_exec_twofile') }
end
file '/marker_busybox_exec_twofile'

View File

@ -1,317 +0,0 @@
# Two variables, one recipe.
caname = 'docker_service_default'
caroot = "/ca/#{caname}"
#########################
# :pull_if_missing, :pull
#########################
# default action, default properties
docker_image 'hello-world'
# non-default name attribute, containing a single quote
docker_image "Tom's container" do
repo 'tduffield/testcontainerd'
end
# :pull action specified
docker_image 'busybox' do
action :pull
end
# :pull_if_missing
docker_image 'debian' do
action :pull_if_missing
end
# specify a tag and read/write timeouts
docker_image 'alpine' do
tag '3.1'
read_timeout 60
write_timeout 60
end
# host override
docker_image 'alpine-localhost' do
repo 'alpine'
tag '2.7'
host 'tcp://127.0.0.1:2376'
tls_verify true
tls_ca_cert "#{caroot}/ca.pem"
tls_client_cert "#{caroot}/cert.pem"
tls_client_key "#{caroot}/key.pem"
end
#########
# :remove
#########
# install something so it can be used to test the :remove action
execute 'pull vbatts/slackware' do
command 'docker pull vbatts/slackware ; touch /marker_image_slackware'
creates '/marker_image_slackware'
action :run
end
docker_image 'vbatts/slackware' do
action :remove
end
########
# :save
########
docker_image 'save hello-world' do
repo 'hello-world'
destination '/hello-world.tar'
not_if { ::File.exist?('/hello-world.tar') }
action :save
end
########
# :load
########
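# Round-trip test: pull cirros, save it to a tarball, remove the image, then load it back from the tarball; the shared marker file makes the whole sequence run only once.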
docker_image 'cirros' do
action :pull
not_if { ::File.exist?('/marker_load_cirros-1') }
end
docker_image 'save cirros' do
repo 'cirros'
destination '/cirros.tar'
not_if { ::File.exist?('/cirros.tar') }
action :save
end
docker_image 'remove cirros' do
repo 'cirros'
not_if { ::File.exist?('/marker_load_cirros-1') }
action :remove
end
docker_image 'load cirros' do
source '/cirros.tar'
not_if { ::File.exist?('/marker_load_cirros-1') }
action :load
end
file '/marker_load_cirros-1' do
action :create
end
###########################
# :build
###########################
# Build from a Dockerfile
directory '/usr/local/src/container1' do
action :create
end
cookbook_file '/usr/local/src/container1/Dockerfile' do
source 'Dockerfile_1'
action :create
end
docker_image 'someara/image-1' do
tag 'v0.1.0'
source '/usr/local/src/container1/Dockerfile'
force true
not_if { ::File.exist?('/marker_image_image-1') }
action :build
end
file '/marker_image_image-1' do
action :create
end
# Build from a directory
directory '/usr/local/src/container2' do
action :create
end
file '/usr/local/src/container2/foo.txt' do
content 'Dockerfile_2 contains ADD for this file'
action :create
end
cookbook_file '/usr/local/src/container2/Dockerfile' do
source 'Dockerfile_2'
action :create
end
docker_image 'someara/image.2' do
tag 'v0.1.0'
source '/usr/local/src/container2'
action :build_if_missing
end
# Build from a tarball
cookbook_file '/usr/local/src/image_3.tar' do
source 'image_3.tar'
action :create
end
docker_image 'image_3' do
tag 'v0.1.0'
source '/usr/local/src/image_3.tar'
action :build_if_missing
end
#########
# :import
#########
docker_image 'hello-again' do
tag 'v0.1.0'
source '/hello-world.tar'
action :import
end
################
# :tag and :push
################
######################
# This commented out section was manually tested by replacing the
# authentication creds with real live Dockerhub creds.
#####################
# docker_registry 'https://index.docker.io/v1/' do
# username 'youthere'
# password 'p4sswh1rr3d'
# email 'youthere@computers.biz'
# end
# # name-w-dashes
# docker_tag 'public dockerhub someara/name-w-dashes:v1.0.1' do
# target_repo 'hello-again'
# target_tag 'v0.1.0'
# to_repo 'someara/name-w-dashes'
# to_tag 'latest'
# action :tag
# end
# docker_image 'push someara/name-w-dashes' do
# repo 'someara/name-w-dashes'
# not_if { ::File.exist?('/marker_image_public_name-w-dashes') }
# action :push
# end
# file '/marker_image_public_name-w-dashes' do
# action :create
# end
# # name.w.dots
# docker_tag 'public dockerhub someara/name.w.dots:latest' do
# target_repo 'busybox'
# target_tag 'latest'
# to_repo 'someara/name.w.dots'
# to_tag 'latest'
# action :tag
# end
# docker_image 'push someara/name.w.dots' do
# repo 'someara/name.w.dots'
# not_if { ::File.exist?('/marker_image_public_name.w.dots') }
# action :push
# end
# file '/marker_image_public_name.w.dots' do
# action :create
# end
# # private-repo-test
# docker_tag 'public dockerhub someara/private-repo-test:v1.0.1' do
# target_repo 'hello-world'
# target_tag 'latest'
# to_repo 'someara/private-repo-test'
# to_tag 'latest'
# action :tag
# end
# docker_image 'push someara/private-repo-test' do
# repo 'someara/private-repo-test'
# not_if { ::File.exist?('/marker_image_public_private-repo-test') }
# action :push
# end
# file '/marker_image_public_private-repo-test' do
# action :create
# end
# docker_image 'someara/private-repo-test'
# public images
docker_image 'someara/name-w-dashes'
docker_image 'someara/name.w.dots'
##################
# Private registry
##################
include_recipe 'docker_test::registry'
# for pushing to private repo
docker_tag 'private repo tag for name-w-dashes:v1.0.1' do
target_repo 'hello-again'
target_tag 'v0.1.0'
to_repo 'localhost:5043/someara/name-w-dashes'
to_tag 'latest'
action :tag
end
# for pushing to private repo
docker_tag 'private repo tag for name.w.dots' do
target_repo 'busybox'
target_tag 'latest'
to_repo 'localhost:5043/someara/name.w.dots'
to_tag 'latest'
action :tag
end
docker_tag 'private repo tag for name.w.dots v0.1.0' do
target_repo 'busybox'
target_tag 'latest'
to_repo 'localhost:5043/someara/name.w.dots'
to_tag 'v0.1.0'
action :tag
end
docker_registry 'localhost:5043' do
username 'testuser'
password 'testpassword'
email 'alice@computers.biz'
end
docker_image 'localhost:5043/someara/name-w-dashes' do
not_if { ::File.exist?('/marker_image_private_name-w-dashes') }
action :push
end
file '/marker_image_private_name-w-dashes' do
action :create
end
docker_image 'localhost:5043/someara/name.w.dots' do
not_if { ::File.exist?('/marker_image_private_name.w.dots') }
action :push
end
docker_image 'localhost:5043/someara/name.w.dots' do
not_if { ::File.exist?('/marker_image_private_name.w.dots') }
tag 'v0.1.0'
action :push
end
file '/marker_image_private_name.w.dots' do
action :create
end
# Pull from the public Dockerhub after being authenticated to a
# private one
docker_image 'fedora' do
action :pull
end

View File

@ -1,15 +0,0 @@
#########################
# :prune
#########################
docker_image_prune 'hello-world' do
dangling true
end
docker_image_prune 'prune-old-images' do
dangling true
prune_until '1h30m'
with_label 'com.example.vendor=ACME'
without_label 'no_prune'
action :prune
end

View File

@ -1,4 +0,0 @@
docker_installation_package 'default' do
version '18.06.0'
action :create
end

View File

@ -1,4 +0,0 @@
docker_installation_script 'default' do
repo node['docker']['repo']
action :create
end

View File

@ -1,4 +0,0 @@
docker_installation_tarball 'default' do
version node['docker']['version']
action :create
end

View File

@ -1,251 +0,0 @@
# pull alpine image
docker_image 'alpine' do
tag '3.1'
action :pull_if_missing
end
# unicode characters
docker_network 'seseme_straße' do
action :create
end
###########
# network_a
###########
# defaults
docker_network 'network_a' do
action :create
end
# docker run --net=
docker_container 'echo-base-network_a' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_a'
action :run
end
docker_container 'echo-station-network_a' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_a'
action :run
end
############
# network_b
############
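# Create the network out-of-band with the CLI so the :delete action below has something to remove; the marker file keeps the execute idempotent.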
execute 'create network_b' do
command 'docker network create network_b'
not_if { ::File.exist?('/marker_delete_network_b') }
end
file '/marker_delete_network_b' do
action :create
end
# Delete a network
docker_network 'network_b' do
action :delete
end
###########
# network_c
###########
# specify subnet and gateway
docker_network 'network_c' do
subnet '192.168.88.0/24'
gateway '192.168.88.1'
action :create
end
# docker run --net=
docker_container 'echo-base-network_c' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_c'
action :run
end
docker_container 'echo-station-network_c' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_c'
action :run
end
###########
# network_d
###########
# create a network with aux_address
docker_network 'network_d' do
subnet '192.168.89.0/24'
gateway '192.168.89.1'
aux_address ['a=192.168.89.2', 'b=192.168.89.3']
end
docker_container 'echo-base-network_d' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_d'
action :run
end
docker_container 'echo-station-network_d' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_d'
action :run
end
###########
# network_e
###########
# specify overlay driver
docker_network 'network_e' do
driver 'overlay'
action :create
end
###########
# network_f
###########
# create a network with an ip-range
docker_network 'network_f' do
driver 'bridge'
subnet '172.28.0.0/16'
gateway '172.28.5.254'
ip_range '172.28.5.0/24'
end
docker_container 'echo-base-network_f' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_f'
ip_address '172.28.5.5'
action :run
end
docker_container 'echo-station-network_f' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_f'
action :run
end
###########
# network_g
###########
# create an overlay network with multiple subnets
docker_network 'network_g' do
driver 'overlay'
subnet ['192.168.0.0/16', '192.170.0.0/16']
gateway ['192.168.0.100', '192.170.0.100']
ip_range '192.168.1.0/24'
aux_address ['a=192.168.1.5', 'b=192.168.1.6', 'a=192.170.1.5', 'b=192.170.1.6']
end
docker_container 'echo-base-network_g' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_g'
action :run
end
docker_container 'echo-station-network_g' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_g'
action :run
end
###########
# network_h
###########
# connect same container to multiple networks
docker_network 'network_h1' do
action :create
end
docker_network 'network_h2' do
action :create
end
docker_container 'container1-network_h' do
repo 'alpine'
tag '3.1'
network_mode 'network_h1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
not_if { ::File.exist?('/marker_network_h') }
action :run
end
file '/marker_network_h' do
action :create
end
docker_network 'network_h2 connector' do
container 'container1-network_h'
network_name 'network_h2'
action :connect
end
# disconnect from a network
docker_network 'network_h1 disconnector' do
container 'container1-network_h'
network_name 'network_h1'
action :disconnect
end
##############
# network_ipv6
##############
# IPv6 enabled network
docker_network 'network_ipv6' do
enable_ipv6 true
subnet 'fd00:dead:beef::/48'
action :create
end
##############
# network_ipv4
##############
docker_network 'network_ipv4' do
action :create
end
##################
# network_internal
##################
docker_network 'network_internal' do
internal true
action :create
end

View File

@ -1,94 +0,0 @@
######################
# :install and :update
######################
sshfs_caps = [
{
'Name' => 'network',
'Value' => ['host'],
},
{
'Name' => 'mount',
'Value' => ['/var/lib/docker/plugins/'],
},
{
'Name' => 'mount',
'Value' => [''],
},
{
'Name' => 'device',
'Value' => ['/dev/fuse'],
},
{
'Name' => 'capabilities',
'Value' => ['CAP_SYS_ADMIN'],
},
]
docker_plugin 'vieux/sshfs' do
grant_privileges sshfs_caps
end
docker_plugin 'configure vieux/sshfs' do
action :update
local_alias 'vieux/sshfs'
options(
'DEBUG' => '1'
)
end
docker_plugin 'remove vieux/sshfs' do
local_alias 'vieux/sshfs'
action :remove
end
#######################
# :install with options
#######################
docker_plugin 'rbd' do
remote 'wetopi/rbd'
remote_tag '1.0.1'
grant_privileges true
options(
'LOG_LEVEL' => '4'
)
end
docker_plugin 'remove rbd' do
local_alias 'rbd'
action :remove
end
#######################################
# :install twice (should be idempotent)
#######################################
docker_plugin 'sshfs 2.1' do
local_alias 'sshfs'
remote 'vieux/sshfs'
remote_tag 'latest'
grant_privileges true
end
docker_plugin 'sshfs 2.2' do
local_alias 'sshfs'
remote 'vieux/sshfs'
remote_tag 'latest'
grant_privileges true
end
docker_plugin 'enable sshfs' do
local_alias 'sshfs'
action :enable
end
docker_plugin 'disable sshfs' do
local_alias 'sshfs'
action :disable
end
docker_plugin 'remove sshfs again' do
local_alias 'sshfs'
action :remove
end

View File

@ -1,192 +0,0 @@
# We're going to need some SSL certificates for testing.
caroot = '/tmp/registry/tls'
directory caroot.to_s do
recursive true
action :create
end
# Self signed CA
bash 'generating CA private and public key' do
cmd = 'openssl req'
cmd += ' -x509'
cmd += ' -nodes'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -subj '/CN=kitchen2docker/'"
cmd += ' -newkey rsa:4096'
cmd += " -keyout #{caroot}/ca-key.pem"
cmd += " -out #{caroot}/ca.pem"
cmd += ' 2>&1>/dev/null'
code cmd
not_if "/usr/bin/test -f #{caroot}/ca-key.pem"
not_if "/usr/bin/test -f #{caroot}/ca.pem"
action :run
end
# server certs
bash 'creating private key for docker server' do
code "openssl genrsa -out #{caroot}/server-key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/server-key.pem"
action :run
end
bash 'generating certificate request for server' do
cmd = 'openssl req'
cmd += ' -new'
cmd += ' -sha256'
cmd += " -subj '/CN=#{node['hostname']}/'"
cmd += " -key #{caroot}/server-key.pem"
cmd += " -out #{caroot}/server.csr"
code cmd
not_if "/usr/bin/test -f #{caroot}/server.csr"
action :run
end
file "#{caroot}/server-extfile.cnf" do
content "subjectAltName = IP:#{node['ipaddress']},IP:127.0.0.1\n"
action :create
end
bash 'signing request for server' do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/server.csr"
cmd += " -out #{caroot}/server.pem"
cmd += " -extfile #{caroot}/server-extfile.cnf"
not_if "/usr/bin/test -f #{caroot}/server.pem"
code cmd
action :run
end
# client certs
bash 'creating private key for docker client' do
code "openssl genrsa -out #{caroot}/key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/key.pem"
action :run
end
bash 'generating certificate request for client' do
cmd = 'openssl req'
cmd += ' -new'
cmd += " -subj '/CN=client/'"
cmd += " -key #{caroot}/key.pem"
cmd += " -out #{caroot}/client.csr"
code cmd
not_if "/usr/bin/test -f #{caroot}/client.csr"
action :run
end
file "#{caroot}/client-extfile.cnf" do
content "extendedKeyUsage = clientAuth\n"
action :create
end
bash 'signing request for client' do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/client.csr"
cmd += " -out #{caroot}/cert.pem"
cmd += " -extfile #{caroot}/client-extfile.cnf"
code cmd
not_if "/usr/bin/test -f #{caroot}/cert.pem"
action :run
end
# Set up a test registry to test :push
# https://github.com/docker/distribution/blob/master/docs/authentication.md
#
docker_image 'nginx' do
tag '1.9'
end
docker_image 'registry' do
tag '2.6.1'
end
directory '/tmp/registry/auth' do
recursive true
owner 'root'
mode '0755'
action :create
end
template '/tmp/registry/auth/registry.conf' do
source 'registry/auth/registry.conf.erb'
owner 'root'
mode '0755'
action :create
end
# install certificates
execute 'copy server cert for registry' do
command "cp #{caroot}/server.pem /tmp/registry/auth/server.crt"
creates '/tmp/registry/auth/server.crt'
action :run
end
execute 'copy server key for registry' do
command "cp #{caroot}/server-key.pem /tmp/registry/auth/server.key"
creates '/tmp/registry/auth/server.key'
action :run
end
# testuser / testpassword
template '/tmp/registry/auth/registry.password' do
source 'registry/auth/registry.password.erb'
owner 'root'
mode '0755'
action :create
end
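# The not_if guards below skip the docker run when a matching container already exists.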
bash 'start docker registry' do
code <<-EOF
docker run \
-d \
-p 5000:5000 \
--name registry_service \
--restart=always \
registry:2
EOF
not_if "[ ! -z `docker ps -qaf 'name=registry_service$'` ]"
end
bash 'start docker registry proxy' do
code <<-EOF
docker run \
-d \
-p 5043:443 \
--name registry_proxy \
--restart=always \
-v /tmp/registry/auth/:/etc/nginx/conf.d \
nginx:1.9
EOF
not_if "[ ! -z `docker ps -qaf 'name=registry_proxy$'` ]"
end
bash 'wait for docker registry and proxy' do
code <<-EOF
i=0
tries=20
while true; do
((i++))
netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"
[ $? -eq 0 ] && break
[ $i -eq $tries ] && break
sleep 1
done
EOF
not_if 'netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"'
end

View File

@ -1,7 +0,0 @@
docker_service 'default' do
storage_driver 'overlay2'
bip '10.10.10.0/24'
default_ip_address_pool 'base=10.10.10.0/16,size=24'
service_manager 'systemd'
action [:create, :start]
end

View File

@ -1,84 +0,0 @@
#########################
# service named 'default'
#########################
docker_service 'default' do
install_method 'package'
graph '/var/lib/docker'
action [:create, :start]
end
################
# simple process
################
docker_image 'busybox' do
host 'unix:///var/run/docker.sock'
end
docker_container 'service default echo server' do
container_name 'an_echo_server'
repo 'busybox'
command 'nc -ll -p 7 -e /bin/cat'
port '7'
action :run
end
#####################
# squid forward proxy
#####################
directory '/etc/squid_forward_proxy' do
recursive true
owner 'root'
mode '0755'
action :create
end
template '/etc/squid_forward_proxy/squid.conf' do
source 'squid_forward_proxy/squid.conf.erb'
owner 'root'
mode '0755'
notifies :redeploy, 'docker_container[squid_forward_proxy]'
action :create
end
docker_image 'cbolt/squid' do
tag 'latest'
action :pull
end
docker_container 'squid_forward_proxy' do
repo 'cbolt/squid'
tag 'latest'
restart_policy 'on-failure'
kill_after 5
port '3128:3128'
command '/usr/sbin/squid -NCd1'
volumes '/etc/squid_forward_proxy/squid.conf:/etc/squid/squid.conf'
subscribes :redeploy, 'docker_image[cbolt/squid]'
action :run
end
#############
# service one
#############
docker_service 'one' do
graph '/var/lib/docker-one'
host 'unix:///var/run/docker-one.sock'
http_proxy 'http://127.0.0.1:3128'
https_proxy 'http://127.0.0.1:3128'
action :start
end
docker_image 'hello-world' do
host 'unix:///var/run/docker-one.sock'
tag 'latest'
end
docker_container 'hello-world' do
host 'unix:///var/run/docker-one.sock'
command '/hello'
action :create
end

@@ -1,35 +0,0 @@
# service
include_recipe 'docker_test::default'
# Build an image that takes longer than two minutes
# (the default read_timeout) to build
#
docker_image 'centos'
# Make sure that the image does not exist, to avoid a cache hit
# while building the docker image. This can legitimately fail
# if the image does not exist.
execute 'rmi kkeane/image.4' do
command 'docker rmi kkeane/image.4:chef'
ignore_failure true
action :run
end
directory '/usr/local/src/container4' do
action :create
end
cookbook_file '/usr/local/src/container4/Dockerfile' do
source 'Dockerfile_4'
action :create
end
docker_image 'timeout test image' do
repo 'kkeane/image.4'
read_timeout 3600 # 1 hour
write_timeout 3600 # 1 hour
tag 'chef'
source '/usr/local/src/container4'
action :build_if_missing
end

@@ -1,54 +0,0 @@
###########
# remove_me
###########
execute 'docker volume create --name remove_me' do
not_if { ::File.exist?('/marker_remove_me') }
action :run
end
file '/marker_remove_me' do
action :create
end
docker_volume 'remove_me' do
action :remove
end
#######
# hello
#######
docker_volume 'hello' do
action :create
end
docker_volume 'hello again' do
volume_name 'hello_again'
action :create
end
##################
# hello containers
##################
docker_image 'alpine' do
tag '3.1'
action :pull_if_missing
end
docker_container 'file_writer' do
repo 'alpine'
tag '3.1'
volumes ['hello:/hello']
command 'touch /hello/sean_was_here'
action :run_if_missing
end
docker_container 'file_reader' do
repo 'alpine'
tag '3.1'
volumes ['hello:/hello']
command 'ls /hello/sean_was_here'
action :run_if_missing
end

@@ -1,7 +0,0 @@
server {
resolver 8.8.8.8;
listen 8080;
location / {
proxy_pass http://$http_host$request_uri;
}
}

@@ -1,38 +0,0 @@
upstream docker-registry {
server <%= node['ipaddress'] %>:5000;
}
server {
listen 443 ssl;
server_name <%= node['ipaddress'] %>;
# TLS certs (signed by the test CA) for the testing registry
ssl_certificate /etc/nginx/conf.d/server.crt;
ssl_certificate_key /etc/nginx/conf.d/server.key;
# disable any limits to avoid HTTP 413 for large image uploads
client_max_body_size 0;
# required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
chunked_transfer_encoding on;
location /v2/ {
# Do not allow connections from docker 1.5 and earlier
# docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
if ($http_user_agent ~* "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*\$" ) {
return 404;
}
# To add basic authentication to v2 use auth_basic setting plus add_header
auth_basic "registry.localhost";
auth_basic_user_file /etc/nginx/conf.d/registry.password;
add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
proxy_pass http://docker-registry;
proxy_set_header Host $http_host; # required for docker client's sake
proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 900;
}
}

@@ -1 +0,0 @@
testuser:$apr1$TPsqBp55$icazbv6goXik2yJVSlp7l1

@@ -1,44 +0,0 @@
http_port 3128
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1025-65535 # unregistered ports
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
http_access allow localnet
http_access allow localhost
http_access deny all
coredump_dir /squid/var/cache/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
# access_log /dev/stdout

@@ -1,10 +0,0 @@
if os[:name] == 'amazon'
describe command('/usr/bin/docker --version') do
its(:exit_status) { should eq 0 }
end
else
describe command('/usr/bin/docker --version') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/18.06.0/) }
end
end

@@ -1,3 +0,0 @@
describe command('/usr/bin/docker --version') do
its(:exit_status) { should eq 0 }
end

@@ -1,3 +0,0 @@
describe command('/usr/bin/docker --version') do
its(:exit_status) { should eq 0 }
end

@@ -1,3 +0,0 @@
describe command('/usr/bin/docker --version') do
its(:exit_status) { should eq 0 }
end

@@ -1,4 +0,0 @@
describe command('/usr/bin/docker --version') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/18.06.0/) }
end

@@ -1,277 +0,0 @@
###########
# reference
###########
# https://docs.docker.com/engine/reference/commandline/network_create/
###########
# network_a
###########
describe command("docker network ls -qf 'name=network_a$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker network inspect -f "{{ .Driver }}" network_a') do
its(:exit_status) { should eq 0 }
its(:stdout) { should eq "bridge\n" }
end
describe command('docker network inspect -f "{{ .Containers }}" network_a') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-station-network_a' }
end
describe command('docker network inspect -f "{{ .Containers }}" network_a') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-base-network_a' }
end
###########
# network_b
###########
describe command("docker network ls -qf 'name=network_b$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should be_empty }
end
###########
# network_c
###########
describe command("docker network ls -qf 'name=network_c$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker network inspect -f "{{ .Driver }}" network_c') do
its(:exit_status) { should eq 0 }
its(:stdout) { should eq "bridge\n" }
end
describe command('docker network inspect network_c') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{192\.168\.88\.0/24}) }
its(:stdout) { should match(/192\.168\.88\.1/) }
end
describe command('docker network inspect -f "{{ .Containers }}" network_c') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-station-network_c' }
end
describe command('docker network inspect -f "{{ .Containers }}" network_c') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-base-network_c' }
end
###########
# network_d
###########
describe command("docker network ls -qf 'name=network_d$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker network inspect -f "{{ .Driver }}" network_d') do
its(:exit_status) { should eq 0 }
its(:stdout) { should eq "bridge\n" }
end
describe command('docker network inspect network_d') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/a.*192\.168\.89\.2/) }
its(:stdout) { should match(/b.*192\.168\.89\.3/) }
end
###########
# network_e
###########
describe command("docker network ls -qf 'name=network_e$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker network inspect -f "{{ .Driver }}" network_e') do
its(:exit_status) { should eq 0 }
its(:stdout) { should eq "overlay\n" }
end
###########
# network_f
###########
describe command("docker network ls -qf 'name=network_f$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker network inspect -f "{{ .Driver }}" network_f') do
its(:exit_status) { should eq 0 }
its(:stdout) { should eq "bridge\n" }
end
describe command('docker network inspect network_f') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{Subnet.*172\.28\.0\.0/16}) }
its(:stdout) { should match(%r{IPRange.*172\.28\.5\.0/24}) }
its(:stdout) { should match(/Gateway.*172\.28\.5\.254/) }
end
describe command('docker network inspect -f "{{ .Containers }}" network_f') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-station-network_f' }
end
describe command('docker network inspect -f "{{ .Containers }}" network_f') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-base-network_f' }
end
describe command('docker inspect -f "{{ .NetworkSettings.Networks.network_f.IPAddress }}" echo-base-network_f') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match '172.28.5.5' }
end
###########
# network_g
###########
describe command("docker network ls -qf 'name=network_g$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker network inspect -f "{{ .Driver }}" network_g') do
its(:exit_status) { should eq 0 }
its(:stdout) { should eq "overlay\n" }
end
describe command('docker network inspect network_g') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{Subnet.*192\.168\.0\.0/16}) }
its(:stdout) { should match(%r{IPRange.*192\.168\.1\.0/24}) }
its(:stdout) { should match(/Gateway.*192\.168\.0\.100/) }
its(:stdout) { should match(/a.*192\.168\.1\.5/) }
its(:stdout) { should match(/a.*192\.168\.1\.5/) }
its(:stdout) { should match(%r{Subnet.*192\.170\.0\.0/16}) }
its(:stdout) { should match(/Gateway.*192\.170\.0\.100/) }
its(:stdout) { should match(/a.*192\.170\.1\.5/) }
its(:stdout) { should match(/a.*192\.170\.1\.5/) }
end
describe command('docker network inspect -f "{{ .Containers }}" network_g') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-station-network_g' }
end
describe command('docker network inspect -f "{{ .Containers }}" network_g') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'echo-base-network_g' }
end
###########
# network_h
###########
describe command("docker network ls -qf 'name=network_h1$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command("docker network inspect -f '{{ range $c:=.Containers }}{{ $c.Name }}{{ end }}' network_h1") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not match 'container1-network_h' }
end
describe command("docker network inspect -f '{{ range $c:=.Containers }}{{ $c.Name }}{{ end }}' network_h2") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'container1-network_h' }
end
##############
# network_ipv4
##############
describe command("docker network ls -qf 'name=network_ipv4$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command("docker network inspect -f '{{ .EnableIPv6 }}' network_ipv4") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'false' }
end
describe command("docker network inspect -f '{{ .Internal }}' network_ipv4") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'false' }
end
##############
# network_ipv6
##############
describe command("docker network ls -qf 'name=network_ipv6$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command("docker network inspect -f '{{ .EnableIPv6 }}' network_ipv6") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'true' }
end
describe command("docker network inspect -f '{{ range $i:=.IPAM.Config }}{{ .Subnet | printf \"%s\\n\" }}{{ end }}' network_ipv6") do
its(:exit_status) { should eq 0 }
its(:stdout) { should include 'fd00:dead:beef::/48' }
end
##################
# network_internal
##################
describe command("docker network ls -qf 'name=network_internal'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command("docker network inspect -f '{{ .Internal }}' network_internal") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match 'true' }
end
# describe command('docker network inspect test-network') do
# its(:exit_status) { should eq 0 }
# end
# describe command('docker network inspect test-network-overlay') do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match(/Driver.*overlay/) }
# end
# describe command('docker network inspect test-network-ip') do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match(%r{Subnet.*192\.168\.88\.0/24}) }
# its(:stdout) { should match(/Gateway.*192\.168\.88\.3/) }
# end
# describe command('docker network inspect test-network-aux') do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match(/a.*192\.168\.89\.4/) }
# its(:stdout) { should match(/b.*192\.168\.89\.5/) }
# end
# describe command('docker network inspect test-network-ip-range') do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match('asdf') }
# end
# describe command('docker network inspect test-network-connect') do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should include(network_container['Id']) }
# end

@@ -1,962 +0,0 @@
volumes_filter = '{{ .Config.Volumes }}'
mounts_filter = '{{ .Mounts }}'
uber_options_network_mode = 'bridge'
##################################################
# test/cookbooks/docker_test/recipes/default.rb
##################################################
# docker_service[default]
describe docker.version do
its('Server.Version') { should eq '18.06.0-ce' }
end
describe command('docker info') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/environment=/) }
its(:stdout) { should match(/foo=/) }
end
##############################################
# test/cookbooks/docker_test/recipes/image.rb
##############################################
# test/cookbooks/docker_test/recipes/image.rb
# docker_image[hello-world]
describe docker_image('hello-world:latest') do
it { should exist }
its('repo') { should eq 'hello-world' }
its('tag') { should eq 'latest' }
end
# docker_image[Tom's container]
describe docker_image('tduffield/testcontainerd:latest') do
it { should exist }
its('repo') { should eq 'tduffield/testcontainerd' }
its('tag') { should eq 'latest' }
end
# docker_image[busybox]
describe docker_image('busybox:latest') do
it { should exist }
its('repo') { should eq 'busybox' }
its('tag') { should eq 'latest' }
end
# docker_image[alpine]
describe docker_image('alpine:3.1') do
it { should exist }
its('repo') { should eq 'alpine' }
its('tag') { should eq '3.1' }
end
describe docker_image('alpine:2.7') do
it { should exist }
its('repo') { should eq 'alpine' }
its('tag') { should eq '2.7' }
end
# docker_image[vbatts/slackware]
describe docker_image('vbatts/slackware:latest') do
it { should_not exist }
its('repo') { should_not eq 'vbatts/slackware' }
its('tag') { should_not eq 'latest' }
end
# docker_image[save cirros]
describe file('/cirros.tar') do
it { should be_file }
its('mode') { should cmp '0644' }
end
# docker_image[load cirros]
describe docker_image('cirros:latest') do
it { should exist }
its('repo') { should eq 'cirros' }
its('tag') { should eq 'latest' }
end
# docker_image[someara/image-1]
describe docker_image('someara/image-1:v0.1.0') do
it { should exist }
its('repo') { should eq 'someara/image-1' }
its('tag') { should eq 'v0.1.0' }
end
# docker_image[someara/image.2]
describe docker_image('someara/image.2:v0.1.0') do
it { should exist }
its('repo') { should eq 'someara/image.2' }
its('tag') { should eq 'v0.1.0' }
end
# docker_image[image_3]
describe docker_image('image_3:v0.1.0') do
it { should exist }
its('repo') { should eq 'image_3' }
its('tag') { should eq 'v0.1.0' }
end
# docker_image[name-w-dashes]
describe docker_image('localhost:5043/someara/name-w-dashes:latest') do
it { should exist }
its('repo') { should eq 'localhost:5043/someara/name-w-dashes' }
its('tag') { should eq 'latest' }
end
# docker_tag[private repo tag for name.w.dots:latest / v0.1.0 / / v0.1.1 /]
describe docker_image('localhost:5043/someara/name.w.dots:latest') do
it { should exist }
its('repo') { should eq 'localhost:5043/someara/name.w.dots' }
its('tag') { should eq 'latest' }
end
describe docker_image('localhost:5043/someara/name.w.dots:v0.1.0') do
it { should exist }
its('repo') { should eq 'localhost:5043/someara/name.w.dots' }
its('tag') { should eq 'v0.1.0' }
end
# FIXME: We need to test the "docker_registry" stuff...
# I can't figure out how to search the local registry to see if the
# authentication and :push actions in the test recipe actually worked.
#
# Skipping for now.
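# A hedged sketch of one way to close that gap (assumes curl on the node and
# the test registry still answering on 5043 with the testuser credentials):
#
# describe command('curl -ks -u testuser:testpassword https://localhost:5043/v2/_catalog') do
#   its(:exit_status) { should eq 0 }
#   its(:stdout) { should match(%r{someara/name-w-dashes}) }
# end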
##################################################
# test/cookbooks/docker_test/recipes/container.rb
##################################################
# docker_container[hello-world]
describe docker_container('hello-world') do
it { should exist }
it { should_not be_running }
end
# docker_container[busybox_ls]
describe docker_container('busybox_ls') do
it { should exist }
it { should_not be_running }
end
# docker_container[alpine_ls]
describe docker_container('alpine_ls') do
it { should exist }
it { should_not be_running }
end
# docker_container[an_echo_server]
describe docker_container('an_echo_server') do
it { should exist }
it { should be_running }
its('ports') { should eq '0.0.0.0:7->7/tcp' }
end
# docker_container[another_echo_server]
describe docker_container('another_echo_server') do
it { should exist }
it { should be_running }
its('ports') { should eq '0.0.0.0:32768->7/tcp' }
end
# docker_container[an_udp_echo_server]
describe docker_container('an_udp_echo_server') do
it { should exist }
it { should be_running }
its('ports') { should eq '0.0.0.0:5007->7/udp' }
end
# docker_container[multi_ip_port]
describe docker_container('multi_ip_port') do
it { should exist }
it { should be_running }
its('ports') { should eq '0.0.0.0:8301->8301/udp, 127.0.0.1:8500->8500/tcp, 127.0.1.1:8500->8500/tcp, 0.0.0.0:32769->8301/tcp' }
end
# docker_container[port_range]
describe command("docker inspect -f '{{ .HostConfig.PortBindings }}' port_range") do
its(:exit_status) { should eq 0 }
its(:stdout) { should include('2000/tcp:[{ }]') }
its(:stdout) { should include('2001/tcp:[{ }]') }
its(:stdout) { should include('2000/udp:[{ }]') }
its(:stdout) { should include('2001/udp:[{ }]') }
its(:stdout) { should include('3000/tcp:[{ }]') }
its(:stdout) { should include('3001/tcp:[{ }]') }
its(:stdout) { should include('8000/tcp:[{0.0.0.0 7000}]') }
its(:stdout) { should include('8001/tcp:[{0.0.0.0 7001}]') }
its(:stdout) { should include('8002/tcp:[{0.0.0.0 7002}]') }
end
# docker_container[bill]
describe docker_container('bill') do
it { should exist }
it { should_not be_running }
end
# docker_container[hammer_time]
describe docker_container('hammer_time') do
it { should exist }
it { should_not be_running }
end
describe command("docker ps -af 'name=hammer_time$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited/) }
end
# docker_container[red_light]
describe docker_container('red_light') do
it { should exist }
it { should be_running }
end
describe command("docker ps -af 'name=red_light$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Paused/) }
end
# docker_container[green_light]
describe docker_container('green_light') do
it { should exist }
it { should be_running }
end
# docker_container[quitter]
describe docker_container('quitter') do
it { should exist }
it { should be_running }
end
# docker_container[restarter]
describe docker_container('restarter') do
it { should exist }
it { should be_running }
end
# docker_container[deleteme]
describe docker_container('deleteme') do
it { should_not exist }
it { should_not be_running }
end
# docker_container[redeployer]
describe docker_container('redeployer') do
it { should exist }
it { should be_running }
end
# docker_container[unstarted_redeployer]
describe docker_container('unstarted_redeployer') do
it { should exist }
it { should_not be_running }
end
# docker_container[bind_mounter]
describe docker_container('bind_mounter') do
it { should exist }
it { should_not be_running }
end
describe command('docker inspect -f "{{ .HostConfig.Binds }}" bind_mounter') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\/hostbits\:\/bits}) }
its(:stdout) { should match(%r{\/more-hostbits\:\/more-bits}) }
its(:stdout) { should match(%r{\/winter\:\/spring\:ro}) }
end
# docker_container[binds_alias]
describe docker_container('binds_alias') do
it { should exist }
it { should_not be_running }
end
describe command('docker inspect -f "{{ .HostConfig.Binds }}" binds_alias') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\/fall\:\/sun}) }
its(:stdout) { should match(%r{\/winter\:\/spring\:ro}) }
end
describe command('docker inspect -f "{{ .Config.Volumes }}" binds_alias') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\/snow\:\{\}}) }
its(:stdout) { should match(%r{\/summer\:\{\}}) }
end
# docker_container[chef_container]
describe docker_container('chef_container') do
it { should exist }
it { should_not be_running }
end
describe command("docker inspect -f \"#{volumes_filter}\" chef_container") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\/opt\/chef\:}) }
end
# docker_container[ohai_debian]
describe docker_container('ohai_debian') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs ohai_debian') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/debian/) }
end
describe command("docker inspect -f \"#{mounts_filter}\" ohai_debian") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\/opt\/chef}) }
end
# docker_container[env]
describe docker_container('env') do
it { should exist }
it { should_not be_running }
end
describe command('docker inspect -f "{{ .Config.Env }}" env') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[PATH=\/usr\/bin FOO=bar GOODBYE=TOMPETTY 1950=2017\]}) }
end
# docker_container[env_files]
describe docker_container('env_files') do
it { should exist }
it { should_not be_running }
end
describe command('docker inspect -f "{{ .Config.Env }}" env_files') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/\[GOODBYE=TOMPETTY 1950=2017 HELLO=WORLD /) }
end
# docker_container[ohai_again]
describe docker_container('ohai_again') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs ohai_again') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/ohai_time/) }
end
# docker_container[cmd_test]
describe docker_container('cmd_test') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs cmd_test') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/.dockerenv/) }
end
# docker_container[sean_was_here]
describe docker_container('sean_was_here') do
it { should_not exist }
it { should_not be_running }
end
describe command('docker run --rm --volumes-from chef_container debian ls -la /opt/chef/') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/sean_was_here-/) }
end
# docker_container[attached]
describe docker_container('attached') do
it { should exist }
it { should_not be_running }
end
describe command('docker run --rm --volumes-from chef_container debian ls -la /opt/chef/') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/attached-\d{12}/) }
end
# docker_container[attached_with_timeout]
describe docker_container('attached_with_timeout') do
it { should exist }
it { should_not be_running }
end
describe command('docker run --rm --volumes-from chef_container debian ls -la /opt/chef/') do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not match(/attached_with_timeout-\d{12}/) }
end
# docker_container[cap_add_net_admin]
describe docker_container('cap_add_net_admin') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs cap_add_net_admin') do
its(:exit_status) { should eq 0 }
its(:stderr) { should_not match(/RTNETLINK answers: Operation not permitted/) }
end
# docker_container[cap_add_net_admin_error]
describe docker_container('cap_add_net_admin_error') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs cap_add_net_admin_error') do
its(:exit_status) { should eq 0 }
its(:stderr) { should match(/RTNETLINK answers: Operation not permitted/) }
end
# docker_container[cap_drop_mknod]
describe docker_container('cap_drop_mknod') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs cap_drop_mknod') do
its(:exit_status) { should eq 0 }
its(:stderr) { should match(%r{mknod: /dev/urandom2: Operation not permitted}) }
its(:stderr) { should match(%r{ls: cannot access '/dev/urandom2': No such file or directory}) }
end
# docker_container[cap_drop_mknod_error]
describe docker_container('cap_drop_mknod_error') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs cap_drop_mknod_error') do
its(:exit_status) { should eq 0 }
its(:stderr) { should_not match(%r{mknod: '/dev/urandom2': Operation not permitted}) }
end
# docker_container[fqdn]
describe docker_container('fqdn') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs fqdn') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/computers.biz/) }
end
# docker_container[dns]
describe docker_container('dns') do
it { should exist }
it { should_not be_running }
end
describe command('docker inspect -f "{{ .HostConfig.Dns }}" dns') do
its(:stdout) { should match(/\[4.3.2.1 1.2.3.4\]/) }
end
# docker_container[extra_hosts]
describe docker_container('extra_hosts') do
it { should exist }
it { should_not be_running }
end
describe command('docker inspect -f "{{ .HostConfig.ExtraHosts }}" extra_hosts') do
its(:stdout) { should match(/\[east:4.3.2.1 west:1.2.3.4\]/) }
end
# docker_container[devices_sans_cap_sys_admin]
# describe command("docker ps -af 'name=devices_sans_cap_sys_admin$'") do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match(/Exited/) }
# end
# FIXME: find a method to test this that works across all platforms in test-kitchen
# Is this test invalid?
# describe command("md5sum /root/disk1") do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match(/0f343b0931126a20f133d67c2b018a3b/) }
# end
# docker_container[devices_with_cap_sys_admin]
# describe command("docker ps -af 'name=devices_with_cap_sys_admin$'") do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should match(/Exited/) }
# end
# describe command('md5sum /root/disk1') do
# its(:exit_status) { should eq 0 }
# its(:stdout) { should_not match(/0f343b0931126a20f133d67c2b018a3b/) }
# end
# docker_container[cpu_shares]
describe docker_container('cpu_shares') do
it { should exist }
it { should_not be_running }
end
describe command("docker inspect -f '{{ .HostConfig.CpuShares }}' cpu_shares") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/512/) }
end
# docker_container[cpuset_cpus]
describe docker_container('cpuset_cpus') do
it { should exist }
it { should_not be_running }
end
describe command("docker inspect -f '{{ .HostConfig.CpusetCpus }}' cpuset_cpus") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/0,1/) }
end
# docker_container[try_try_again]
# FIXME: Find better tests
describe docker_container('try_try_again') do
it { should exist }
it { should_not be_running }
end
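# A hedged alternative (assumes this engine version reports RestartCount
# through docker inspect):
# describe command("docker inspect -f '{{ .RestartCount }}' try_try_again") do
#   its(:exit_status) { should eq 0 }
#   its(:stdout) { should match(/\d+/) }
# end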
# docker_container[reboot_survivor]
describe command("docker ps -af 'name=reboot_survivor$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not match(/Exited/) }
end
# docker_container[reboot_survivor_retry]
describe docker_container('reboot_survivor_retry') do
it { should exist }
it { should be_running }
end
# docker_container[link_source]
describe docker_container('link_source') do
it { should exist }
it { should be_running }
end
# docker_container[link_source_2]
describe docker_container('link_source_2') do
it { should exist }
it { should be_running }
end
# docker_container[link_target_1]
describe docker_container('link_target_1') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs link_target_1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not match(/ping: bad address 'hello'/) }
end
# docker_container[link_target_2]
describe docker_container('link_target_2') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs link_target_2') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{HELLO_NAME=/link_target_2/hello}) }
end
# docker_container[link_target_3]
describe docker_container('link_target_3') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs link_target_3') do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not match(/ping: bad address 'hello_again'/) }
end
describe command("docker inspect -f '{{ .HostConfig.Links }}' link_target_3") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[/link_source:/link_target_3/hello /link_source_2:/link_target_3/hello_again\]}) }
end
# docker_container[link_target_4]
describe docker_container('link_target_4') do
it { should exist }
it { should_not be_running }
end
describe command('docker logs link_target_4') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{HELLO_NAME=/link_target_4/hello}) }
its(:stdout) { should match(%r{HELLO_AGAIN_NAME=/link_target_4/hello_again}) }
end
describe command("docker inspect -f '{{ .HostConfig.Links }}' link_target_4") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[/link_source:/link_target_4/hello /link_source_2:/link_target_4/hello_again\]}) }
end
# docker_container[dangler]
# describe command('ls -la `cat /dangler_volpath`') do
# its(:exit_status) { should_not eq 0 }
# end
# FIXME: this changed with 1.8.x. Find a way to sanely test across various platforms
# docker_container[mutator]
describe docker_container('mutator') do
it { should exist }
it { should_not be_running }
end
describe file('/mutator.tar') do
it { should be_file }
its('mode') { should cmp '0644' }
end
# docker_container[network_mode]
describe docker_container('network_mode') do
it { should exist }
it { should be_running }
end
describe command("docker inspect -f '{{ .HostConfig.NetworkMode }}' network_mode") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/host/) }
end
# docker_container[oom_kill_disable]
describe command("docker ps -af 'name=oom_kill_disable$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited \(0\)/) }
end
describe command("docker inspect --format '{{ .HostConfig.OomKillDisable }}' oom_kill_disable") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/true/) }
end
# docker_container[oom_score_adj]
describe command("docker ps -af 'name=oom_score_adj$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited \(0\)/) }
end
describe command("docker inspect --format '{{ .HostConfig.OomScoreAdj }}' oom_score_adj") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/600/) }
end
# docker_container[ulimits]
describe docker_container('ulimits') do
it { should exist }
it { should be_running }
end
describe command("docker inspect -f '{{ .HostConfig.Ulimits }}' ulimits") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/nofile=40960:40960 core=100000000:100000000 memlock=100000000:100000000/) }
end
# docker_container[uber_options]
describe docker_container('uber_options') do
it { should exist }
it { should be_running }
end
describe command("docker inspect -f '{{ .Config.Domainname }}' uber_options") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/computers.biz/) }
end
describe command("docker inspect -f '{{ .Config.MacAddress }}' uber_options") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/00:00:DE:AD:BE:EF/) }
end
describe command("docker inspect -f '{{ .HostConfig.Ulimits }}' uber_options") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/nofile=40960:40960 core=100000000:100000000 memlock=100000000:100000000/) }
end
describe command("docker inspect -f '{{ .HostConfig.NetworkMode }}' uber_options") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/#{uber_options_network_mode}/) }
end
# docker inspect returns the labels unsorted
describe command("docker inspect -f '{{ .Config.Labels }}' uber_options") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/foo:bar/) }
its(:stdout) { should match(/hello:world/) }
end
# docker_container[overrides-1]
describe docker_container('overrides-1') do
it { should exist }
it { should be_running }
end
describe command('docker inspect -f "{{ .Config.User }}" overrides-1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/bob/) }
end
describe command('docker inspect -f "{{ .Config.Env }}" overrides-1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin FOO=foo BAR=bar BIZ=biz BAZ=baz\]}) }
end
describe command('docker inspect -f "{{ .Config.Entrypoint }}" overrides-1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/\[\]/) }
end
describe command('docker inspect -f "{{ .Config.Cmd }}" overrides-1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[ls -la /\]}) }
end
describe command('docker inspect -f "{{ .Config.WorkingDir }}" overrides-1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{/var}) }
end
describe command('docker inspect -f "{{ .Config.Volumes }}" overrides-1') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{map\[/home:{}\]}) }
end
# docker_container[overrides-2]
describe docker_container('overrides-2') do
it { should exist }
it { should be_running }
end
describe command('docker inspect -f "{{ .Config.User }}" overrides-2') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/operator/) }
end
describe command('docker inspect -f "{{ .Config.Env }}" overrides-2') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[FOO=biz PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin BAR=bar BIZ=biz BAZ=baz\]}) }
end
describe command('docker inspect -f "{{ .Config.Entrypoint }}" overrides-2') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[/bin/sh -c\]}) }
end
describe command('docker inspect -f "{{ .Config.Cmd }}" overrides-2') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{\[ls -laR /\]}) }
end
describe command('docker inspect -f "{{ .Config.WorkingDir }}" overrides-2') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{/tmp}) }
end
# docker_container[syslogger]
describe docker_container('syslogger') do
it { should exist }
it { should be_running }
end
describe command("docker inspect -f '{{ .HostConfig.LogConfig.Type }}' syslogger") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/syslog/) }
end
describe command("docker inspect -f '{{ .HostConfig.LogConfig.Config }}' syslogger") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/tag:container-syslogger/) }
end
# docker_container[host_override]
describe docker_container('host_override') do
it { should exist }
it { should_not be_running }
end
# docker_container[kill_after]
describe docker_container('kill_after') do
it { should exist }
it { should_not be_running }
end
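# Derive the container's observed run time from the engine's own StartedAt /
# FinishedAt timestamps, then check it against the configured kill_after window.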
kill_after_start = command("docker inspect -f '{{.State.StartedAt}}' kill_after").stdout
kill_after_start = DateTime.parse(kill_after_start).to_time.to_i
kill_after_finish = command("docker inspect -f '{{.State.FinishedAt}}' kill_after").stdout
kill_after_finish = DateTime.parse(kill_after_finish).to_time.to_i
kill_after_run_time = kill_after_finish - kill_after_start
describe kill_after_run_time do
it { should be_within(5).of(1) }
end
# docker_container[pid_mode]
describe command("docker ps -af 'name=pid_mode$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited \(0\)/) }
end
describe command("docker inspect --format '{{ .HostConfig.PidMode }}' pid_mode") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/host/) }
end
# docker_container[init]
describe command("docker ps -af 'name=init$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited \(0\)/) }
end
describe command("docker inspect --format '{{ .HostConfig.Init }}' init") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/true/) }
end
# docker_container[ipc_mode]
describe command("docker ps -af 'name=ipc_mode$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited \(0\)/) }
end
describe command("docker inspect --format '{{ .HostConfig.IpcMode }}' ipc_mode") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/host/) }
end
# docker_container[uts_mode]
describe command("docker ps -af 'name=uts_mode$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/Exited \(0\)/) }
end
describe command("docker inspect --format '{{ .HostConfig.UTSMode }}' uts_mode") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/host/) }
end
describe command("docker inspect --format '{{ .HostConfig.ReadonlyRootfs }}' ro_rootfs") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/true/) }
end
# sysctls
describe command("docker inspect --format '{{ .HostConfig.Sysctls }}' sysctls") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/net.core.somaxconn:65535/) }
its(:stdout) { should match(/net.core.xfrm_acq_expires:42/) }
end
# cmd_change
describe command("docker inspect -f '{{ .Config.Cmd }}' cmd_change") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/nc -ll -p 9/) }
end
# docker_container[memory]
describe command("docker inspect -f '{{ .HostConfig.KernelMemory }}' memory") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/10485760/) }
end
describe command("docker inspect -f '{{ .HostConfig.Memory }}' memory") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/5242880/) }
end
describe command("docker inspect -f '{{ .HostConfig.MemorySwap }}' memory") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/5242880/) }
end
describe command("docker inspect -f '{{ .HostConfig.MemorySwappiness }}' memory") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/50/) }
end
describe command("docker inspect -f '{{ .HostConfig.MemoryReservation }}' memory") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/5242880/) }
end
describe command("docker inspect -f '{{ .HostConfig.ShmSize }}' memory") do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/67108864/) }
end

@@ -1,23 +0,0 @@
# service named 'default'
describe command('docker images') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/busybox/) }
end
describe command('docker ps -a') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/an_echo_server/) }
end
# service one
describe command('docker --host=unix:///var/run/docker-one.sock images') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/^hello-world/) }
its(:stdout) { should_not match(/^alpine/) }
end
describe command('docker --host=unix:///var/run/docker-one.sock ps -a') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/hello-world/) }
its(:stdout) { should_not match(/an_echo_server/) }
end

@@ -1,43 +0,0 @@
###########
# reference
###########
# https://docs.docker.com/engine/reference/commandline/volume_create/
###########
# remove_me
###########
describe command('docker volume ls -q') do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not match(/^remove_me$/) }
end
#######
# hello
#######
describe command('docker volume ls -q') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(/^hello$/) }
its(:stdout) { should match(/^hello_again$/) }
end
##################
# hello containers
##################
describe command("docker ps -qaf 'name=file_writer$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command("docker ps -qaf 'name=file_reader$'") do
its(:exit_status) { should eq 0 }
its(:stdout) { should_not be_empty }
end
describe command('docker logs file_reader') do
its(:exit_status) { should eq 0 }
its(:stdout) { should match(%r{/hello/sean_was_here}) }
end

@@ -0,0 +1 @@
default[:mysql][:root_password] = 'sploitme'

@@ -18,5 +18,6 @@ version '0.1.0'
#
# source_url 'https://github.com/<insert_org_here>/metasploitable3' if respond_to?(:source_url)
depends 'docker'
depends 'mysql'
depends 'apt', '~> 7.2'
depends 'docker', '~> 4.9'
depends 'mysql', '~> 8.3'

@@ -4,10 +4,6 @@
#
# Copyright:: 2017, Rapid7, All Rights Reserved.
execute 'apt-get update' do
command 'apt-get update'
end
package 'apache2' do
action :install
end
@@ -59,8 +55,9 @@ execute 'make /var/www/html writeable' do
command 'chmod o+w /var/www/html'
end
execute 'rm /var/www/html/index.html' do
command 'rm /var/www/html/index.html'
file '/var/www/html/index.html' do
action :delete
only_if { File.exists?('/var/www/html/index.html') }
end
service 'apache2' do

@@ -4,10 +4,6 @@
#
# Copyright:: 2017, Rapid7, All Rights Reserved.
execute "apt-get update" do
command "apt-get update"
end
package 'openjdk-6-jre'
package 'openjdk-6-jdk'

@@ -25,6 +25,10 @@ end
execute 'unzip chatbot' do
command 'unzip /tmp/chatbot.zip -d /opt'
only_if { Dir['/opt/chatbot'].empty? }
notifies :run, 'execute[chown chatbot]', :immediately
notifies :run, 'execute[chmod chatbot]', :immediately
notifies :run, 'execute[install chatbot]', :immediately
end
execute 'chown chatbot' do
@@ -37,6 +41,7 @@ end
execute 'install chatbot' do
command '/opt/chatbot/install.sh'
not_if { File.exists?( '/etc/init/chatbot.conf' ) }
end
service 'chatbot' do

@@ -4,10 +4,6 @@
#
# Copyright:: 2017, Rapid7, All Rights Reserved.
execute 'apt-get update' do
command 'apt-get update'
end
package 'cups' do
action :install
end

@@ -35,12 +35,16 @@ cookbook_file '/opt/docker/7_of_diamonds.zip' do
mode '0700'
end
bash 'build docker image for 7 of diamonds' do
code <<-EOH
cd /opt/docker
docker build -t "7_of_diamonds" .
docker run -dit --restart always --name 7_of_diamonds 7_of_diamonds
EOH
docker_image '7_of_diamonds' do
action :build_if_missing
source '/opt/docker/'
end
docker_container '7_of_diamonds' do
action :run_if_missing
restart_policy 'always'
tty true
open_stdin true
end
file '/opt/docker/7_of_diamonds.zip' do

@@ -4,10 +4,6 @@
#
# Copyright:: 2017, Rapid7, All Rights Reserved.
execute "apt-get update" do
command "apt-get update"
end
bash 'setup for knockd, used for flag' do
code_to_execute = ""
code_to_execute << "iptables -A FORWARD 1 -p tcp -m tcp --dport 8989 -j DROP\n"

@@ -4,13 +4,13 @@
#
# Copyright:: 2017, Rapid7, All Rights Reserved.
execute "apt-get update" do
command "apt-get update"
end
mysql_service 'default' do
initial_root_password 'sploitme'
initial_root_password "#{node[:mysql][:root_password]}"
bind_address '0.0.0.0'
port '3306'
action [:create, :start]
end
mysql_client 'default' do
action :create
end

@@ -5,13 +5,9 @@
# Copyright:: 2017, Rapid7, All Rights Reserved.
#
#
execute 'add nodejs 4 repository' do
command 'curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -'
end
execute "apt-get update" do
command "apt-get update"
not_if { ::File.exist?('/usr/bin/node') }
end
package 'nodejs'

@@ -32,7 +32,7 @@ end
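# Dropping and recreating the database on every converge keeps this block
# idempotent without guarding on prior state.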
bash 'create payroll database and import data' do
code <<-EOH
mysql -S /var/run/mysql-default/mysqld.sock --user="root" --password="sploitme" --execute="CREATE DATABASE payroll;"
mysql -S /var/run/mysql-default/mysqld.sock --user="root" --password="sploitme" payroll < /tmp/payroll.sql
mysql -S /var/run/mysql-default/mysqld.sock --user="root" --password="#{node[:mysql][:root_password]}" --execute="DROP DATABASE IF EXISTS payroll; CREATE DATABASE payroll;"
mysql -S /var/run/mysql-default/mysqld.sock --user="root" --password="#{node[:mysql][:root_password]}" payroll < /tmp/payroll.sql
EOH
end

@@ -10,10 +10,6 @@ include_recipe 'metasploitable::apache'
php_tar = "php-5.4.5.tar.gz"
execute "apt-get update" do
command "apt-get update"
end
execute "install prereqs" do
command "apt-get install -y gcc make build-essential \
libxml2-dev libcurl4-openssl-dev libpcre3-dev libbz2-dev libjpeg-dev \
@@ -29,30 +25,39 @@ end
remote_file "#{Chef::Config[:file_cache_path]}/#{php_tar}" do
source "#{node[:php545][:download_url]}/#{php_tar}"
mode '0644'
action :create_if_missing
not_if 'apache2ctl -M | grep -q php5'
end
remote_file "#{Chef::Config[:file_cache_path]}/libxml29_compat.patch" do
source "https://mail.gnome.org/archives/xml/2012-August/txtbgxGXAvz4N.txt"
mode '0644'
end
execute "extract php" do
cwd Chef::Config[:file_cache_path]
command "tar xvzf #{Chef::Config[:file_cache_path]}/#{php_tar} -C #{Chef::Config[:file_cache_path]}"
action :create_if_missing
not_if 'apache2ctl -M | grep -q php5'
end
execute "patch php" do
cwd "#{Chef::Config[:file_cache_path]}/php-5.4.5"
command "patch -p0 -b < ../libxml29_compat.patch"
action :nothing
end
execute "extract php" do
cwd Chef::Config[:file_cache_path]
command "tar xvzf #{Chef::Config[:file_cache_path]}/#{php_tar} -C #{Chef::Config[:file_cache_path]}"
only_if {Dir["#{Chef::Config[:file_cache_path]}/php-5.4.5"].empty?}
not_if 'apache2ctl -M | grep -q php5'
notifies :run, 'execute[patch php]', :immediately
end
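# The guarded extract above is the trigger for patching: when php5 is not yet
# loaded into Apache, a fresh extraction immediately fires the libxml
# compatibility patch, which otherwise stays at action :nothing.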
bash "compile and install php" do
cwd "#{Chef::Config[:file_cache_path]}/php-5.4.5"
code <<-EOH
./configure --with-apxs2=/usr/bin/apxs --with-mysqli --enable-embedded-mysqli --with-gd --with-mcrypt --enable-mbstring --with-pdo-mysql
make
make install
./configure --with-apxs2=/usr/bin/apxs --with-mysqli --enable-embedded-mysqli --with-gd --with-mcrypt --enable-mbstring --with-pdo-mysql \
&& make && make install
EOH
not_if 'apache2ctl -M | grep -q php5'
end
cookbook_file 'etc/apache2/mods-available/php5.conf' do

@@ -14,6 +14,7 @@ bash "download and extract phpmyadmin" do
tar xvfz /tmp/phpMyAdmin-3.5.8-all-languages.tar.gz -C /var/www/html
mv /var/www/html/phpMyAdmin-3.5.8-all-languages /var/www/html/phpmyadmin
EOH
not_if { ::File.exists?('/var/www/html/phpmyadmin') }
end
cookbook_file 'var/www/html/phpmyadmin/config.inc.php' do

@@ -10,29 +10,35 @@ include_recipe 'metasploitable::apache'
proftpd_tar = 'proftpd-1.3.5.tar.gz'
remote_file "#{Chef::Config[:file_cache_path]}/#{proftpd_tar}" do
source "#{node[:proftpd][:download_url]}/#{proftpd_tar}"
mode '0644'
end
execute "extract proftpd" do
cwd Chef::Config[:file_cache_path]
command 'tar zxfv proftpd-1.3.5.tar.gz'
not_if { ::File.exists?(File.join(Chef::Config[:file_cache_path], 'proftpd-1.3.5'))}
action :nothing
end
bash 'compile and install proftpd' do
cwd "#{Chef::Config[:file_cache_path]}/proftpd-1.3.5"
code <<-EOH
./configure --prefix=/opt/proftpd --with-modules=mod_copy
make
make install
./configure --prefix=/opt/proftpd --with-modules=mod_copy \
&& make && make install
EOH
not_if { ::File.exist?( '/opt/proftpd/sbin/proftpd') }
action :nothing
end
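# The download below drives the whole chain: extract and compile are :nothing
# actions that run only when a fresh tarball is actually fetched, so an
# existing /opt/proftpd install is never rebuilt.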
remote_file "#{Chef::Config[:file_cache_path]}/#{proftpd_tar}" do
source "#{node[:proftpd][:download_url]}/#{proftpd_tar}"
mode '0644'
action :create_if_missing
not_if { File.exists?( '/opt/proftpd/sbin/proftpd' ) }
notifies :run, 'execute[extract proftpd]', :immediately
notifies :run, 'bash[compile and install proftpd]', :immediately
end
execute 'add hostname to /etc/hosts' do
command "echo #{node[:ipaddress]} #{node[:hostname]} >> /etc/hosts"
not_if "grep -q '#{node[:ipaddress]} #{node[:hostname]}' /etc/hosts"
end
cookbook_file '/etc/init.d/proftpd' do

@@ -11,19 +11,17 @@ include_recipe 'metasploitable::nodejs'
package 'git'
git '/opt/readme_app' do
repository 'https://github.com/jbarnett-r7/metasploitable3-readme.git'
action :checkout
end
directory '/opt/readme_app' do
owner 'chewbacca'
group 'users'
mode '0644'
end
bash "clone the readme app and install gems" do
code <<-EOH
cd /opt/
git clone https://github.com/jbarnett-r7/metasploitable3-readme.git readme_app
EOH
end
template '/opt/readme_app/start.sh' do
source 'readme_app/start.sh.erb'
end
@@ -34,11 +32,12 @@ cookbook_file '/etc/init/readme_app.conf' do
end
bash 'set permissions' do
cwd '/opt/readme_app'
code <<-EOH
chown -R chewbacca:users /opt/readme_app
find /opt/readme_app -type d | xargs chmod 0755
find /opt/readme_app -type f | xargs chmod 0644
chmod 0755 /opt/readme_app/start.sh
chown -R chewbacca:users .
git ls-files | xargs chmod 0644
git ls-files | xargs -n 1 dirname | uniq | xargs chmod 755
chmod 0755 ./start.sh
EOH
end

Some files were not shown because too many files have changed in this diff.