To get a new Docker image, you can either pull it from a registry (such as Docker Hub) or create your own. For example, you can pull a specific version of the ubuntu image with $ docker pull ubuntu:18.04. There are tens of thousands of images available on Docker Hub, and you can also search for images directly from the command line using docker search.

This page contains information about hosting your own registry using the open source Docker Registry. For information about Docker Hub, which offers a hosted registry with additional features such as teams, organizations, webhooks, automated builds, etc., see Docker Hub.

Before you can deploy a registry, you need to install Docker on the host. A registry is an instance of the registry image, and runs within Docker.

This topic provides basic information about deploying and configuring a registry. For an exhaustive list of configuration options, see the configuration reference.

If you have an air-gapped datacenter, see Considerations for air-gapped registries.

Run a local registry

Use a command like the following to start the registry container:
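
For example, a minimal invocation like the following (using the registry:2 image and publishing the registry's default port 5000 on the host) starts a registry container named registry:

    $ docker run -d -p 5000:5000 --name registry registry:2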

The registry is now ready to use.

Warning: These first few examples show registry configurations that are only appropriate for testing. A production-ready registry must be protected by TLS and should ideally use an access-control mechanism. Keep reading and then continue to the configuration guide to deploy a production-ready registry.

Copy an image from Docker Hub to your registry

You can pull an image from Docker Hub and push it to your registry. The following example pulls the ubuntu:16.04 image from Docker Hub and re-tags it as my-ubuntu, then pushes it to the local registry. Finally, the ubuntu:16.04 and my-ubuntu images are deleted locally and the my-ubuntu image is pulled from the local registry. The complete command sequence is shown after the steps.

  1. Pull the ubuntu:16.04 image from Docker Hub.

  2. Tag the image as localhost:5000/my-ubuntu. This creates an additional tag for the existing image. When the first part of the tag is a hostname and port, Docker interprets this as the location of a registry, when pushing.

  3. Push the image to the local registry running at localhost:5000:

  4. Remove the locally-cached ubuntu:16.04 and localhost:5000/my-ubuntu images, so that you can test pulling the image from your registry. This does not remove the localhost:5000/my-ubuntu image from your registry.

  5. Pull the localhost:5000/my-ubuntu image from your local registry.
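
Putting the five steps together, the commands might look like this (assuming the local registry from the previous section is listening on localhost:5000):

    $ docker pull ubuntu:16.04
    $ docker tag ubuntu:16.04 localhost:5000/my-ubuntu
    $ docker push localhost:5000/my-ubuntu
    $ docker image remove ubuntu:16.04
    $ docker image remove localhost:5000/my-ubuntu
    $ docker pull localhost:5000/my-ubuntu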

Stop a local registry

To stop the registry, use the same docker container stop command as with any other container.

To remove the container, use docker container rm.
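
For a registry container named registry, as in the earlier examples, that would be:

    $ docker container stop registry
    $ docker container rm -v registry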

Basic configuration

To configure the container, you can pass additional or modified options to the docker run command.

The following sections provide basic guidelines for configuring your registry. For more details, see the registry configuration reference.

Start the registry automatically

If you want to use the registry as part of your permanent infrastructure, you should set it to restart automatically when Docker restarts or if it exits. This example uses the --restart always flag to set a restart policy for the registry.
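
A sketch of that invocation, reusing the name and port from the earlier examples (stop and remove any existing registry container first):

    $ docker run -d \
        -p 5000:5000 \
        --restart always \
        --name registry \
        registry:2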

Customize the published port

If you are already using port 5000, or you want to run multiple local registries to separate areas of concern, you can customize the registry's port settings. This example runs the registry on port 5001 and also names it registry-test. Remember, the first part of the -p value is the host port and the second part is the port within the container. Within the container, the registry listens on port 5000 by default.
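
For example, a command along these lines maps host port 5001 to the container's default port 5000:

    $ docker run -d \
        -p 5001:5000 \
        --name registry-test \
        registry:2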

If you want to change the port the registry listens on within the container, you can use the environment variable REGISTRY_HTTP_ADDR to change it. This command causes the registry to listen on port 5001 within the container:
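
One possible form of that command (a sketch):

    $ docker run -d \
        -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 \
        -p 5001:5001 \
        --name registry-test \
        registry:2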

Storage customization

Customize the storage location

By default, your registry data is persisted as a Docker volume on the host filesystem. If you want to store your registry contents at a specific location on your host filesystem, such as if you have an SSD or SAN mounted into a particular directory, you might decide to use a bind mount instead. A bind mount is more dependent on the filesystem layout of the Docker host, but more performant in many situations. The following example bind-mounts the host directory /mnt/registry into the registry container at /var/lib/registry/.
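
A sketch of that example:

    $ docker run -d \
        -p 5000:5000 \
        --restart always \
        --name registry \
        -v /mnt/registry:/var/lib/registry \
        registry:2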

Customize the storage back-end

By default, the registry stores its data on the local filesystem, whether you use a bind mount or a volume. You can store the registry data in an Amazon S3 bucket, Google Cloud Platform, or on another storage back-end by using storage drivers. For more information, see storage configuration options.
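
As an illustration, a fragment of the registry's config.yml selecting the S3 storage driver might look like the following sketch; the bucket name and credential placeholders are hypothetical, and the full parameter list is in the storage configuration options:

    storage:
      s3:
        region: us-east-1
        bucket: my-registry-bucket         # hypothetical bucket name
        accesskey: <AWS_ACCESS_KEY_ID>     # placeholder credential
        secretkey: <AWS_SECRET_ACCESS_KEY> # placeholder credential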

Run an externally-accessible registry

Running a registry only accessible on localhost has limited usefulness. In order to make your registry accessible to external hosts, you must first secure it using TLS.

This example is extended in Run the registry as a service below.

Get a certificate

These examples assume the following:

  • Your registry URL is https://myregistry.domain.com/.
  • Your DNS, routing, and firewall settings allow access to the registry's host on port 443.
  • You have already obtained a certificate from a certificate authority (CA).

If you have been issued an intermediate certificate instead, see use an intermediate certificate.

  1. Create a certs directory.

    Copy the .crt and .key files from the CA into the certs directory. The following steps assume that the files are named domain.crt and domain.key.

  2. Stop the registry if it is currently running.

  3. Restart the registry, directing it to use the TLS certificate. This command bind-mounts the certs/ directory into the container at /certs/, and sets environment variables that tell the container where to find the domain.crt and domain.key files. The registry runs on port 443, the default HTTPS port.

  4. Docker clients can now pull from and push to your registry using its external address. The following commands demonstrate this:
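
A sketch of the full sequence for steps 1 through 4, assuming the certificate files are named domain.crt and domain.key, the registry image is registry:2, and /path/to/ stands in for wherever the CA delivered the files:

    $ mkdir -p certs
    $ cp /path/to/domain.crt /path/to/domain.key certs/
    $ docker container stop registry
    $ docker run -d \
        --restart always \
        --name registry \
        -v "$(pwd)"/certs:/certs \
        -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
        -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
        -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
        -p 443:443 \
        registry:2
    $ docker pull ubuntu:16.04
    $ docker tag ubuntu:16.04 myregistry.domain.com/my-ubuntu
    $ docker push myregistry.domain.com/my-ubuntu
    $ docker pull myregistry.domain.com/my-ubuntu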

Use an intermediate certificate

A certificate issuer may supply you with an intermediate certificate. In this case, you must concatenate your certificate with the intermediate certificate to form a certificate bundle. You can do this using the cat command:
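
For example, assuming the intermediate certificates were delivered as a file named intermediate-certificates.pem:

    $ cat intermediate-certificates.pem >> certs/domain.crt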

You can use the certificate bundle just as you use the domain.crt file in the previous example.

Support for Let’s Encrypt

The registry supports using Let’s Encrypt to automatically obtain a browser-trusted certificate. For more information on Let’s Encrypt, see https://letsencrypt.org/how-it-works/ and the relevant section of the registry configuration.

Use an insecure registry (testing only)

It is possible to use a self-signed certificate, or to use your registry insecurely. Unless you have set up verification for your self-signed certificate, this is for testing only. See run an insecure registry.

Run the registry as a service

Swarm services provide several advantages over standalone containers. They use a declarative model, which means that you define the desired state and Docker works to keep your service in that state. Services provide automatic load balancing, scaling, and the ability to control the distribution of your service, among other advantages. Services also allow you to store sensitive data such as TLS certificates in secrets.

The storage back-end you use determines whether you use a fully scaled serviceor a service with either only a single node or a node constraint.

  • If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts.

  • If you use a local bind mount or volume, each worker node writes to its own storage location, which means that each registry contains a different data set. You can solve this problem by using a single-replica service and a node constraint to ensure that only a single worker is writing to the bind mount.

The following example starts a registry as a single-replica service, which is accessible on any swarm node on port 443. It assumes you are using the same TLS certificates as in the previous examples.

First, save the TLS certificate and key as secrets:
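
For example, reusing the certs/ directory from the TLS example:

    $ docker secret create domain.crt certs/domain.crt
    $ docker secret create domain.key certs/domain.key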

Next, add a label to the node where you want to run the registry. To get the node’s name, use docker node ls. Substitute your node’s name for node1 below.
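
For example, a command along these lines:

    $ docker node update --label-add registry=true node1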

Next, create the service, granting it access to the two secrets and constraining it to only run on nodes with the label registry=true. Besides the constraint, you are also specifying that only a single replica should run at a time. The example bind-mounts /mnt/registry on the swarm node to /var/lib/registry/ within the container. Bind mounts rely on the pre-existing source directory, so be sure /mnt/registry exists on node1. You might need to create it before running the following docker service create command.
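
A sketch of that command, assuming the secrets and label created in the previous steps:

    $ docker service create \
        --name registry \
        --secret domain.crt \
        --secret domain.key \
        --constraint 'node.labels.registry==true' \
        --mount type=bind,src=/mnt/registry,dst=/var/lib/registry \
        -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
        -e REGISTRY_HTTP_TLS_CERTIFICATE=/run/secrets/domain.crt \
        -e REGISTRY_HTTP_TLS_KEY=/run/secrets/domain.key \
        --publish published=443,target=443 \
        --replicas 1 \
        registry:2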

By default, secrets are mounted into a service at /run/secrets/<secret-name>.

You can access the service on port 443 of any swarm node. Docker sends therequests to the node which is running the service.

Load balancing considerations

One may want to use a load balancer to distribute load, terminate TLS, or provide high availability. While a full load balancing setup is outside the scope of this document, there are a few considerations that can make the process smoother.

The most important aspect is that a load-balanced cluster of registries must share the same resources. For the current version of the registry, this means the following must be the same:

  • Storage Driver
  • HTTP Secret
  • Redis Cache (if configured)

Differences in any of the above cause problems serving requests. As an example, if you’re using the filesystem driver, all registry instances must have access to the same filesystem root, on the same machine. For other drivers, such as S3 or Azure, they should be accessing the same resource and share an identical configuration. The HTTP Secret coordinates uploads, so it must also be the same across instances. Configuring different redis instances works (at the time of writing), but is not optimal if the instances are not shared, because more requests are directed to the backend.

Important/Required HTTP-Headers

Getting the headers correct is very important. For all responses to any request under the “/v2/” URL space, the Docker-Distribution-API-Version header should be set to the value “registry/2.0”, even for a 4xx response. This header allows the Docker engine to quickly resolve authentication realms and fall back to version 1 registries, if necessary. Confirming this is set up correctly can help avoid problems with fallback.

In the same train of thought, you must make sure you are properly sending the X-Forwarded-Proto, X-Forwarded-For, and Host headers to their “client-side” values. Failure to do so usually makes the registry issue redirects to internal hostnames or downgrade from HTTPS to HTTP.

A properly secured registry should return 401 when the “/v2/” endpoint is hit without credentials. The response should include a WWW-Authenticate challenge, providing guidance on how to authenticate, such as with basic auth or a token service. If the load balancer has health checks, it is recommended to configure it to consider a 401 response as healthy and any other as down. This secures your registry by ensuring that configuration problems with authentication don’t accidentally expose an unprotected registry. If you’re using a less sophisticated load balancer, such as Amazon’s Elastic Load Balancer, which doesn’t allow you to change the healthy response code, health checks can be directed at “/”, which always returns a 200 OK response.

Restricting access

Except for registries running on secure local networks, registries should always implement access restrictions.

Native basic auth

The simplest way to achieve access restriction is through basic authentication (this is very similar to other web servers’ basic authentication mechanism). This example uses native basic authentication, using htpasswd to store the secrets. The complete command sequence is shown after the steps below.

Warning: Basic authentication sends credentials as clear text, so you cannot use it over an unencrypted connection. You must configure TLS first for authentication to work.

  1. Create a password file with one entry for the user testuser, with password testpassword:

  2. Stop the registry.

  3. Start the registry with basic authentication.

  4. Try to pull an image from the registry, or push an image to the registry. These commands fail.

  5. Log in to the registry.

    Provide the username and password from the first step.

    Test that you can now pull an image from the registry or push an image to the registry.
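
A sketch of the full sequence for these steps. The httpd:2 image is used here only as a convenient source of the htpasswd tool (recent registry images no longer bundle it), and the TLS setup from the earlier example is assumed:

    $ mkdir auth
    $ docker run --entrypoint htpasswd httpd:2 -Bbn testuser testpassword > auth/htpasswd
    $ docker container stop registry
    $ docker run -d \
        -p 443:443 \
        --restart always \
        --name registry \
        -v "$(pwd)"/auth:/auth \
        -e "REGISTRY_AUTH=htpasswd" \
        -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
        -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
        -v "$(pwd)"/certs:/certs \
        -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
        -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
        -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
        registry:2
    $ docker login myregistry.domain.com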

X509 errors: X509 errors usually indicate that you are attempting to use a self-signed certificate without configuring the Docker daemon correctly. See run an insecure registry.

More advanced authentication

You may want to leverage more advanced basic auth implementations by using a proxy in front of the registry. See the recipes list.

The registry also supports delegated authentication, which redirects users to a specific trusted token server. This approach is more complicated to set up, and only makes sense if you need to fully configure ACLs and need more control over the registry’s integration into your global authorization and authentication systems. Refer to the registry’s token authentication documentation for background and configuration information.

This approach requires you to implement your own authentication system or leverage a third-party implementation.

Deploy your registry using a Compose file

If your registry invocation is advanced, it may be easier to use a Docker compose file to deploy it, rather than relying on a specific docker run invocation. Use the following example docker-compose.yml as a template.
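
A sketch of such a file, assuming the TLS and htpasswd setup from the previous sections; adjust the port and environment variables to match your configuration:

    services:
      registry:
        image: registry:2
        restart: always
        ports:
          - "443:443"
        environment:
          REGISTRY_HTTP_ADDR: 0.0.0.0:443
          REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
          REGISTRY_HTTP_TLS_KEY: /certs/domain.key
          REGISTRY_AUTH: htpasswd
          REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
          REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
        volumes:
          - /path/data:/var/lib/registry
          - /path/certs:/certs
          - /path/auth:/auth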

Replace /path with the directory which contains the certs/ and auth/ directories.

Start your registry by issuing the following command in the directory containing the docker-compose.yml file:
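
For example:

    $ docker-compose up -d

(With newer Docker versions that ship the Compose plugin, docker compose up -d works as well.)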

Considerations for air-gapped registries

You can run a registry in an environment with no internet connectivity. However, if you rely on any images which are not local, you need to consider the following:

  • You may need to build your local registry’s data volume on a connected host where you can run docker pull to get any images which are available remotely, and then migrate the registry’s data volume to the air-gapped network.

  • Certain images, such as the official Microsoft Windows base images, are not distributable. This means that when you push an image based on one of these images to your private registry, the non-distributable layers are not pushed, but are always fetched from their authorized location. This is fine for internet-connected hosts, but not in an air-gapped set-up.

    You can configure the Docker daemon to allow pushing non-distributable layers to private registries. This is only useful in air-gapped set-ups in the presence of non-distributable images, or in extremely bandwidth-limited situations. You are responsible for ensuring that you are in compliance with the terms of use for non-distributable layers.

    1. Edit the daemon.json file, which is located in /etc/docker/ on Linux hosts and C:\ProgramData\docker\config\daemon.json on Windows Server. Assuming the file was previously empty, add contents like the example shown after these steps.

      The value is an array of registry addresses, separated by commas.

      Save and exit the file.

    2. Restart Docker.

    3. Restart the registry if it does not start automatically.

    4. When you push images to the registries in the list, their non-distributable layers are pushed to the registry.

      Warning: Non-distributable artifacts typically have restrictions on how and where they can be distributed and shared. Only use this feature to push artifacts to private registries and ensure that you are in compliance with any terms that cover redistributing non-distributable artifacts.
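
      The daemon.json contents referenced in step 1 might look like the following sketch; the registry addresses are illustrative, so substitute your own registry's hostname and port:

          {
            "allow-nondistributable-artifacts": [
              "myregistry.domain.com:443",
              "myotherregistry.example.com:5000"
            ]
          }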

Next steps

More specific and advanced information is available in the registry configuration reference and the recipes list.

Docker development best practices

The following development patterns have proven to be helpful for people building applications with Docker. If you have discovered something we should add, let us know.

How to keep your images small

Small images are faster to pull over the network and faster to load into memory when starting containers or services. There are a few rules of thumb to keep image size small:

  • Start with an appropriate base image. For instance, if you need a JDK, consider basing your image on the official openjdk image, rather than starting with a generic ubuntu image and installing openjdk as part of the Dockerfile.

  • Use multistage builds. For instance, you can use the maven image to build your Java application, then reset to the tomcat image and copy the Java artifacts into the correct location to deploy your app, all in the same Dockerfile. This means that your final image doesn’t include all of the libraries and dependencies pulled in by the build, but only the artifacts and the environment needed to run them.

    • If you need to use a version of Docker that does not include multistage builds, try to reduce the number of layers in your image by minimizing the number of separate RUN commands in your Dockerfile. You can do this by consolidating multiple commands into a single RUN line and using your shell’s mechanisms to combine them together. Consider the two Dockerfile fragments shown after this list: the first creates two layers in the image, while the second creates only one.

  • If you have multiple images with a lot in common, consider creating your own base image with the shared components, and basing your unique images on that. Docker only needs to load the common layers once, and they are cached. This means that your derivative images use memory on the Docker host more efficiently and load more quickly.

  • To keep your production image lean but allow for debugging, consider using the production image as the base image for the debug image. Additional testing or debugging tooling can be added on top of the production image.

  • When building images, always tag them with useful tags which codify version information, intended destination (prod or test, for instance), stability, or other information that is useful when deploying the application in different environments. Do not rely on the automatically-created latest tag.
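
The two Dockerfile fragments referenced above might look like the following sketch (installing python is just an illustrative package choice):

    # Creates two image layers:
    RUN apt-get -y update
    RUN apt-get install -y python

    # Creates only one layer:
    RUN apt-get -y update && apt-get install -y python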

Where and how to persist application data

  • Avoid storing application data in your container’s writable layer using storage drivers. This increases the size of your container and is less efficient from an I/O perspective than using volumes or bind mounts.
  • Instead, store data using volumes.
  • One case where it is appropriate to use bind mounts is during development, when you may want to mount your source directory or a binary you just built into your container. For production, use a volume instead, mounting it into the same location as you mounted a bind mount during development.
  • For production, use secrets to store sensitive application data used by services, and use configs for non-sensitive data such as configuration files. If you currently use standalone containers, consider migrating to use single-replica services, so that you can take advantage of these service-only features.

Use CI/CD for testing and deployment

  • When you check in a change to source control or create a pull request, use Docker Hub or another CI/CD pipeline to automatically build and tag a Docker image and test it.

  • Take this even further by requiring your development, testing, and security teams to sign images before they are deployed into production. This way, before an image is deployed into production, it has been tested and signed off by, for instance, development, quality, and security teams.

Differences in development and production environments

Development: Use bind mounts to give your container access to your source code.
Production: Use volumes to store container data.

Development: Use Docker Desktop for Mac or Docker Desktop for Windows.
Production: Use Docker Engine, if possible with userns mapping for greater isolation of Docker processes from host processes.

Development: Don’t worry about time drift.
Production: Always run an NTP client on the Docker host and within each container process, and sync them all to the same NTP server. If you use swarm services, also ensure that each Docker node syncs its clock to the same time source as the containers.
