Services
Configuring the Docker services that work together to deliver your product.
Overview
Services run as containers in your cluster and are orchestrated by Docker. There are two types of services orchestrated in MedStack Control:
- Managed services
- Custom services
Types of Services
Managed services
MedStack Control's managed services can be added to your Docker environment to leverage preconfigured services that offer important functionality and enforce compliance requirements.
In the product today, MedStack Control offers one managed service: the load balancer, which is required to make your services available to the open internet.
Custom Services
Custom services encompass all other services you intend to deploy to Docker. You'll create a custom service when deploying a service from a container image you've built or one that is publicly available. This could include:
- Container images hosted on a private image registry, which requires registry credentials to be added to the Docker configuration.
- Container images hosted on a public registry.
- Container images hosted on a public registry marketplace like Docker Hub.
Service Information
An overview of the service configuration. You can click the Update button to see and modify the complete service configuration. The replica count indicates the number of containers of the service to spin up.
Service Containers
Containers are service instances that run in your cluster. They are the result of successfully configuring a service to operate on your cluster.
You'll learn more about what you can do with containers in the next section of the guide on maintaining your applications.
Tasks / History
The state of containers, from the request to create them through their destruction, is captured in this table. Error messages describe container creation or runtime failures, and are also available in the logs for containers that have not yet been purged from the Docker environment.
You'll learn more about container logs in the guide on maintaining your applications.
Image Update Webhooks
When a webhook is called, Docker stops and restarts all service containers for the service for which the webhook was generated. Webhooks can be enabled or disabled, and a single service can have many webhooks to be used in different workflows and pipelines if desired.
Formatting a request
A webhook can be triggered with an HTTP POST request. The body in curly braces is not used, but is included in the request for completeness.
// Example 1: POST by default
curl --data '{}' $URL
// Example 2: POST explicit
curl -X POST $URL
// Success
{"warnings": null}
A successful response upon calling the webhook comes from Docker Swarm and reports no warnings.
Webhooks in CI/CD
When a service with the tag latest is started, the latest tag of that image will be pulled from the registry for deployment. The latest tag is also implied when no image tag is specified in the service configuration.
// Example 1: Private registry on GitLab
registry.gitlab.com/medstack-inc/flask-demo:latest
// Example 2: Docker Hub Marketplace container image
busybox:latest
// Example 3: Docker Hub Marketplace container image
busybox
In a simple CI/CD pipeline, you can use webhooks to stop and start a service after the image has been built and published to the container image registry. This will force Docker on MedStack Control to shut down the service and restart it with the latest image of the service in the registry.
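As a sketch, such a pipeline step might look like the following shell commands. The image name and webhook URL are placeholders, and the commands are prefixed with echo here so the flow can be read (and exercised) without a Docker daemon or network access; remove the echo prefixes in a real pipeline.

```shell
# Hypothetical CI deploy step; image name and webhook URL are placeholders.
IMAGE="registry.example.com/acme/app:latest"
WEBHOOK_URL="https://example.invalid/webhooks/abc123"

# Build and publish the new image, then trigger the service restart.
# (echo keeps this sketch runnable without Docker or network access.)
echo docker build -t "$IMAGE" .
echo docker push "$IMAGE"
echo curl -X POST "$WEBHOOK_URL"
```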
For more information and helpful tips on managing your CI/CD, please review our Ebook on CI/CD with MedStack Control.
A common deployment pipeline
It's common for teams to set up CI/CD this way when they use GitHub for source control and Docker Hub as their container image registry.
There is a GitHub Action to build and push Docker images that can be used in sequence with a service's webhook to set up a simple deployment pipeline.
Deployment mechanism
The default Swarm configuration for updating services is stop-first. This default mitigates complications that can occur when different versions of the same service are running at the same time.
You can expect the service downtime to span the duration of:
- shutting down the service,
- downloading the latest image to Docker, and
- starting the service.
The time required to do this varies for each service.
In the case of deployment failure
After three failed attempts, the container will not attempt to deploy again. This means the service will run with one fewer replica than configured. To resolve this, update the service with a deployable configuration.
Actions
Create
In the Services tab you can click the New service button to configure and deploy a service.
Select to deploy a managed service or a custom service.
Update
A service can be updated from the service's details page, which can be accessed in a few ways:
- Click the container icon on the Control cluster overview page.
- Click the service name in the Services tab for a cluster.
- Click the View button in the Services tab for a cluster.
On the service details page, click the Update button to edit the service configuration, then click Save at the bottom of the configuration form to save and roll out the new configuration.
Delete
A service can be deleted from the service's details page or the Services tab.
Would you rather just pause a service?
Sometimes it's preferable to simply pause a service by stopping its containers while preserving the service configuration in Docker. This can be done by updating the service configuration to have zero replicas.
Configuration
General
The general information about a service.
Field | Description |
---|---|
Name | The name for the service in the Docker network. Services can communicate internally using their name. |
Image | The domain and path of the container image in the registry, e.g., registry.gitlab.com/medstack-inc/flask-demo:1.2. You may also deploy public Docker Hub marketplace container images by specifying the image and tag, e.g., rails:latest |
Replicas | The number of containers to deploy of the service. |
Advanced options (command, arguments) | Clicking Advanced options exposes input boxes for a command and its arguments to execute at run time. This overrides the command defined in the Dockerfile built into the container image. |
Domain Mapping
The ingress and internal networking information for a service.
Field | Description |
---|---|
Domain | The domain at which the service will serve requests. Multiple domains are supported using comma-separated values, e.g., app1.test.com,app2.test.com |
Internal port | The internal port the Docker container listens on. This is often declared in the Dockerfile with the EXPOSE instruction. |
Load Balancer Healthchecks
Load balancer healthcheck parameters can be configured to improve the availability of Docker services and decrease the disruption to application clients when creating and updating services.
These kinds of healthchecks allow the load balancer to determine whether a container is suitable for receiving traffic. The healthcheck will deem a container suitable for receiving traffic if the path responds with a 2xx or 3xx HTTP response code. You can learn more about Traefik healthchecks.
If all containers for a service fail their healthchecks, the load balancer will return a 503: Service Unavailable for any inbound traffic to the service.
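The suitability rule can be summarized in a tiny sketch: any 2xx or 3xx status passes, everything else fails (the function name is illustrative, not part of the product):

```shell
# Illustrative sketch of the pass/fail rule the load balancer applies to
# healthcheck responses.
is_suitable() {
  case "$1" in
    2??|3??) echo pass ;;  # 2xx and 3xx responses mark the container suitable
    *)       echo fail ;;  # anything else marks it unsuitable for traffic
  esac
}
is_suitable 204   # prints: pass
is_suitable 500   # prints: fail
```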
Enabling a healthcheck
A healthcheck can be enabled by inputting a value into the "Path" field. The path is the only required field when configuring a load balancer healthcheck as the interval and timeout defaults will be assumed unless configured otherwise.
Field | Description |
---|---|
Path | Defines the server URL path for the healthcheck endpoint. (e.g., /my-healthcheck-endpoint) |
Interval (seconds) | Defines the frequency of the healthcheck calls. (default: 30s) |
Timeout (seconds) | Defines the maximum duration the load balancer will wait for a healthcheck request before considering the container unsuitable for receiving traffic. (default: 5s) |
Environment Variables
You can add up to 20 environment variables to a service in the form of key-value pairs.
Configs
Configs are created in the Docker configuration and mapped for use by a service. When a config mapping is created for a service, the config is mounted as a file at the specified path inside the container's filesystem.
You can see in this example in the container shell how the config_db.json file was created at the root directory on the disk.
Root UID / GID
Use the default value 0 in the UID / GID fields to assign the root user / group.
Secrets
Secrets are created in the Docker configuration and can be exposed to a service. When a secret mapping is created for a service, the secret is mounted as a file at /var/run/secrets inside the container, using the filename specified.
You can see in this example in the container shell how the FB_TOKEN.secret file was created at the /var/run/secrets path on the disk.
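Inside a container, an application typically reads the secret file at startup. A minimal sketch, assuming a secret mapped with the filename FB_TOKEN.secret as above; here the file is created in a local directory to stand in for the /var/run/secrets mount, and the token value is a placeholder:

```shell
# Simulate the mounted secret locally; in a real container the file would
# already exist at /var/run/secrets/FB_TOKEN.secret.
SECRETS_DIR="./var_run_secrets_demo"
mkdir -p "$SECRETS_DIR"
printf 'example-token-value' > "$SECRETS_DIR/FB_TOKEN.secret"

# Read the secret into a variable at startup, as an application might.
FB_TOKEN="$(cat "$SECRETS_DIR/FB_TOKEN.secret")"
echo "token length: ${#FB_TOKEN}"   # prints: token length: 19
```

Reading the secret from the file at startup (rather than baking it into the image or an environment variable) is the pattern this mount is designed for.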
Volumes
Volumes are created in the Docker configuration and can be mounted to a service. When a volume mapping is created for a service, the volume data is mounted at the specified path inside the container.
The physical volume data exists on the disk of the node where the first container was deployed.
You can see in this example in the container shell how the /my-photos path was created at the root directory and mounted the volume data on the disk.
Placement Constraints
Placement constraints introduce conditions that determine which nodes a container may run on. Two common examples of placement constraints are:
- Labels, e.g., node.labels.{key} == {value}, using the == comparator.
- Roles, e.g., node.role != {role}, using the != comparator. Roles can be worker or manager.
To get a better understanding of the types of conditions that can be used, you can read more about how to use placement constraints on the official Docker documentation.
Use placement constraints with stateful services
Because Docker Swarm does not currently support persistent volume claims, we recommend using stringent placement constraints to pin stateful services, such as a cache or database, to specific machines.
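For readers familiar with the Docker CLI, the same kind of pin is expressed with the --constraint option on a Swarm service. A sketch, where the service name, image, and label are placeholders and echo keeps it runnable without a Swarm cluster:

```shell
# Hypothetical constraint pinning a database service to nodes labelled
# storage=ssd; echo is used so the sketch runs without a Swarm cluster.
echo docker service create --name db \
  --constraint 'node.labels.storage == ssd' \
  postgres:16
```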
Update Strategy
The update strategy defines the behaviour of new and existing containers when updating services. You can learn more about Docker update options here. Configuring the update strategy can improve the availability of Docker services and decrease disruption to application clients when updating services.
Field | Description |
---|---|
Order | Defines whether the new container starts first, or the old container stops first. (default: stop-first) |
Parallelism | Defines the maximum number of containers updated simultaneously. Setting this value to 0 will update all containers at once. (default: 1) |
Delay (seconds) | Defines the time delay between batches of containers rolled out during a service update, where the batch size is set by the parallelism option. (default: 0s) |
Delay does not consider healthcheck intervals
When using load balancer healthchecks and defining an update strategy, it's important to note that the update strategy delay does not wait for the healthcheck to pass before continuing to roll out the service update.
It is recommended that the delay be long enough for the containers to pass their load balancer healthchecks. Configuring this correctly can reduce downtime for clients when running many service replicas.
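As a rough aid for choosing that delay, the minimum time contributed by the delay setting alone is (batches - 1) × delay, where the number of batches is the replica count divided by parallelism, rounded up. A sketch (illustrative only; real rollouts also include image pull, container start, and healthcheck time, which vary per service):

```shell
# Minimum cumulative update-strategy delay for a rollout (illustrative only).
rollout_delay() {
  replicas=$1; parallelism=$2; delay=$3
  batches=$(( (replicas + parallelism - 1) / parallelism ))  # ceiling division
  echo $(( (batches - 1) * delay ))
}
rollout_delay 6 2 30   # prints: 60  (3 batches, 30s between batches)
```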
Now that you have services running, you'll need to maintain and prepare them for production.