PL/Container enables users to run Greenplum Database procedural language functions inside a Docker container, to avoid security risks associated with executing Python or R code on Greenplum segment hosts. This topic covers the architecture, installation, and setup of PL/Container:
- About the PL/Container Language Extension
- Upgrade PL/Container
- Uninstall PL/Container
- Docker References
For detailed information about using PL/Container, refer to the PL/Container usage documentation.
The PL/Container language extension is available as an open source module. For information about the module, see the README file in the GitHub repository at https://github.com/greenplum-db/plcontainer.
About the PL/Container Language Extension
The Greenplum Database PL/Container language extension allows users to create and run PL/Python or PL/R user-defined functions (UDFs) securely, inside a Docker container. Docker provides the ability to package and run an application in a loosely isolated environment called a container. For information about Docker, see the Docker web site.
Running UDFs inside the Docker container ensures that:
- The function execution process takes place in a separate environment and allows decoupling of the data processing. SQL operators such as "scan," "filter," and "project" are executed at the query executor (QE) side, and advanced data analysis is executed at the container side.
- User code cannot access the OS or the file system of the local host.
- User code cannot introduce any security risks.
- Functions cannot connect back to the Greenplum Database if the container is started with limited or no network access.
Example of the process flow:
Consider a query that selects table data using all available segments, and transforms the data using a PL/Container function. On the first call to a function in a segment container, the query executor on the master host starts the container on that segment host. It then contacts the running container to obtain the results. The container might respond with a Service Provider Interface (SPI) - a SQL query executed by the container to get some data back from the database - returning the result to the query executor.
A container running in standby mode waits on the socket and does not consume any CPU resources. PL/Container memory consumption depends on the amount of data cached in global dictionaries.
The container connection is closed by closing the Greenplum Database session that started the container, and the container shuts down.
About PL/Container 3 Beta
PL/Container 3 Beta includes these changes:
- Provides support for the new GreenplumR interface.
- Reduces the number of processes created by PL/Container, in order to save system resources.
- Supports more containers running concurrently.
- Includes improved log messages to help diagnose problems.
PL/Container 3 is currently a Beta feature, and provides only a Beta R Docker image for executing functions; Python images are not yet available. Save and uninstall any existing PL/Container software before you install PL/Container 3 Beta.
To install PL/Container, perform the following tasks:
- Install Docker
- Install PL/Container
- Install the PL/Container Docker images
- Test the PL/Container installation
The following sections describe these tasks in detail.
PL/Container has the following requirements:
- For PL/Container 2.1.x, use Greenplum Database 6 on CentOS 7.x (or later), RHEL 7.x (or later), or Ubuntu 18.04. Note: PL/Container 2.1.x supports Docker images with Python 3 installed.
- For PL/Container 3 Beta, use Greenplum Database 6.1 or later on CentOS 7.x (or later), RHEL 7.x (or later), or Ubuntu 18.04.
- The minimum supported Linux OS kernel version is 3.10. To verify your kernel release, run:
$ uname -r
- The minimum Docker version on all hosts is Docker 19.03.
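Checked by eye, these version requirements are easy to misread (for example, 3.9 vs. 3.10). The sketch below uses a small hypothetical version_ge helper, built on sort -V, to compare the running kernel release against the 3.10 minimum; the same comparison can be reused for the Docker version string.

```shell
#!/bin/sh
# version_ge A B: succeed when version string A is >= version B.
# Relies on sort -V (GNU coreutils) for natural version ordering.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the running kernel release against the 3.10 minimum.
kernel="$(uname -r | cut -d- -f1)"
if version_ge "$kernel" "3.10"; then
    echo "kernel $kernel meets the 3.10 minimum"
else
    echo "kernel $kernel is below the 3.10 minimum"
fi
```

The same helper applied to, for example, `docker version --format '{{.Server.Version}}'` output can verify the Docker 19.03 minimum.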
To use PL/Container, you must install Docker on all Greenplum Database host systems. These instructions show how to set up the Docker service on CentOS 7; the process is similar for RHEL 7.
These steps install the docker package and start the Docker service as a user with sudo privileges.
- Ensure the user has sudo privileges or is root.
- Install the dependencies required for Docker:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
- Add the Docker repository:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Update yum cache:
sudo yum makecache fast
- Install Docker:
sudo yum -y install docker-ce
- Start Docker daemon:
sudo systemctl start docker
- On each Greenplum Database host, the gpadmin user must be a member of the docker group so that the user can manage Docker images and containers. Assign the Greenplum Database administrator gpadmin to the group:
sudo usermod -aG docker gpadmin
- Exit the session and log in again to update the privileges.
- Configure Docker to start when the host system starts:
sudo systemctl enable docker.service
sudo systemctl start docker.service
- Run a Docker command to test the Docker installation. This command lists the currently running Docker containers:
$ docker ps
- After you install Docker on all Greenplum Database hosts, restart the Greenplum Database system to give Greenplum Database access to Docker.
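The installation steps above can be collected into a single script. The sketch below only echoes each command (a dry run) so that you can review the sequence first; remove the run wrapper and add sudo to execute the steps for real.

```shell
#!/bin/sh
# Dry-run sketch of the CentOS 7 Docker installation steps above.
# Each command is echoed, not executed; prefix with sudo when running for real.
run() { echo "+ $*"; }

run yum install -y yum-utils device-mapper-persistent-data lvm2
run yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
run yum makecache fast
run yum -y install docker-ce
run systemctl enable docker.service
run systemctl start docker.service
run usermod -aG docker gpadmin
```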
Install PL/Container Docker Images
Install the Docker images that PL/Container will use to create language-specific containers to run the UDFs.
The PL/Container open source module contains dockerfiles to build Docker images that can be used with PL/Container. You can build a Docker image to run PL/Python UDFs and a Docker image to run PL/R UDFs. See the dockerfiles in the GitHub repository at https://github.com/greenplum-db/plcontainer.
- Download the files that contain the Docker images from the VMware Tanzu Network. For example, for Greenplum 6.5, click "PL/Container Docker Image for Python 2.1.1", which downloads plcontainer-python-image-2.1.1-gp6.tar.gz with Python 2.7.12 and the Python Data Science Module Package.
If you require images different from the ones provided by Pivotal Greenplum, you can create custom Docker images, install the image, and add the image to the PL/Container configuration.
- If you are using PL/Container 3 Beta, note that this Beta version is compatible only with the associated plcontainer-r-image-3.0.0-beta-gp6.tar.gz image.
Use the plcontainer image-add command to install an image on all Greenplum Database hosts. Provide the -f option to specify the file system location of a downloaded image file. For example:
# Install a Python 2 based Docker image
plcontainer image-add -f /home/gpadmin/plcontainer-python-image-2.1.1-gp6.tar.gz

# Install a Python 3 based Docker image
plcontainer image-add -f /home/gpadmin/plcontainer-python3-image-2.1.1-gp6.tar.gz

# Install an R based Docker image
plcontainer image-add -f /home/gpadmin/plcontainer-r-image-2.1.1-gp6.tar.gz

# Install the Beta R image for use with PL/Container 3.0.0 Beta
plcontainer image-add -f /home/gpadmin/plcontainer-r-image-3.0.0-beta-gp6.tar.gz
The utility displays progress information, similar to:
20200127:21:54:43:004607 plcontainer:mdw:gpadmin-[INFO]:-Checking whether docker is installed on all hosts...
20200127:21:54:43:004607 plcontainer:mdw:gpadmin-[INFO]:-Distributing image file /home/gpadmin/plcontainer-python-images-1.5.0.tar to all hosts...
20200127:21:54:55:004607 plcontainer:mdw:gpadmin-[INFO]:-Loading image on all hosts...
20200127:21:55:37:004607 plcontainer:mdw:gpadmin-[INFO]:-Removing temporary image files on all hosts...
By default, the image-add command copies the image to each Greenplum Database segment and standby master host, and installs the image. When you specify the [-ulc | --use_local_copy] option, plcontainer installs the image only on the host on which you execute the command. Use this option when the PL/Container image already resides on disk on a host.
For more information on image-add options, visit the plcontainer reference page.
- To display the installed Docker images on the local host use:
$ plcontainer image-list
REPOSITORY                              TAG     IMAGE ID      CREATED
pivotaldata/plcontainer_r_shared        devel   7427f920669d  10 months ago
pivotaldata/plcontainer_python_shared   devel   e36827eba53e  10 months ago
pivotaldata/plcontainer_python3_shared  devel   y32827ebe55b  5 months ago
Add the image information to the PL/Container configuration file using plcontainer runtime-add, to allow PL/Container to associate containers with specified Docker images.
Use the -r option to specify your own user-defined runtime ID name, the -i option to specify the Docker image, and the -l option to specify the Docker image language. When there are multiple versions of the same Docker image, for example 1.0.0 or 1.2.0, specify the TAG version using ":" after the image name.
# Add a Python 2 based runtime
plcontainer runtime-add -r plc_python_shared -i pivotaldata/plcontainer_python_shared:devel -l python

# Add a Python 3 based runtime that is supported with PL/Container 2.1.x
plcontainer runtime-add -r plc_python3_shared -i pivotaldata/plcontainer_python3_shared:devel -l python3

# Add an R based runtime
plcontainer runtime-add -r plc_r_shared -i pivotaldata/plcontainer_r_shared:devel -l r

The utility displays progress information as it updates the PL/Container configuration file on the Greenplum Database instances.
For details on other runtime-add options, see the plcontainer reference page.
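Because a runtime is bound to one image:tag combination, it can help to confirm which tag an image reference carries before calling runtime-add. The image_tag function below is a hypothetical convenience, not part of the plcontainer utility; it prints the tag portion of a Docker image reference, defaulting to latest when no explicit tag is given.

```shell
#!/bin/sh
# image_tag REF: print the tag of a Docker image reference, or
# "latest" when the reference carries no explicit tag.
# Hypothetical helper; not part of the plcontainer utility.
image_tag() {
    case "$1" in
        *:*) printf '%s\n' "${1##*:}" ;;
        *)   echo latest ;;
    esac
}

image_tag pivotaldata/plcontainer_python_shared:devel   # prints: devel
image_tag pivotaldata/plcontainer_r_shared              # prints: latest
```

Note that this simple sketch does not handle registry references that include a port number (host:5000/image).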
- Optional: Use Greenplum Database resource groups to manage and limit the total CPU and memory resources of containers in PL/Container runtimes. In this example, the Python runtime will be used with a preconfigured resource group:
plcontainer runtime-add -r plc_python_shared -i pivotaldata/plcontainer_python_shared:devel -l python -s resource_group_id=16391
For more information about enabling, configuring, and using Greenplum Database resource groups with PL/Container, see PL/Container Resource Management.
You can now create a simple function to test your PL/Container installation.
Test the PL/Container Installation
List the configured PL/Container runtimes with the plcontainer runtime-show command. The output is similar to:
PL/Container Runtime Configuration:
---------------------------------------------------------
  Runtime ID: plc_python_shared
  Linked Docker Image: pivotaldata/plcontainer_python_shared:devel
  Runtime Setting(s):
  Shared Directory:
  ---- Shared Directory From HOST '/usr/local/greenplum-db/./bin/plcontainer_clients' to Container '/clientdir', access mode is 'ro'
---------------------------------------------------------
You can also view the PL/Container configuration information with the plcontainer runtime-show -r <runtime_id> command. You can view the PL/Container configuration XML file with the plcontainer runtime-edit command.
Use the psql utility to connect to an existing database.
If the PL/Container extension is not registered with the selected database, first enable it using:
postgres=# CREATE EXTENSION plcontainer;
Create a simple function to test your installation; in the example, the function will use the runtime plc_python_shared:
postgres=# CREATE FUNCTION dummyPython() RETURNS text AS $$
# container: plc_python_shared
return 'hello from Python'
$$ LANGUAGE plcontainer;
And test the function using:
postgres=# SELECT dummyPython();
    dummypython
-------------------
 hello from Python
(1 row)
You can similarly create and test a PL/R function using the plc_r_shared runtime:
postgres=# CREATE FUNCTION dummyR() RETURNS text AS $$
# container: plc_r_shared
return ('hello from R')
$$ LANGUAGE plcontainer;
CREATE FUNCTION
postgres=# select dummyR();
    dummyr
--------------
 hello from R
(1 row)
For further details and examples about using PL/Container functions, see PL/Container Functions.
Upgrade PL/Container
To upgrade PL/Container, you save the current configuration, upgrade PL/Container, and then restore the configuration after the upgrade. There is no need to update the Docker images when you upgrade PL/Container.
- Save the PL/Container configuration. For example, to save the configuration to a file named plcontainer202-backup.xml in the local directory:
$ plcontainer runtime-backup -f plcontainer202-backup.xml
- Use the Greenplum Database gppkg utility with the -u option to update the PL/Container language extension. For example, the following command updates the PL/Container language extension to version 2.1.1 on a Linux system:
$ gppkg -u plcontainer-2.1.1-gp6-rhel7_x86_64.gppkg
- Source the Greenplum Database environment file:
$ source $GPHOME/greenplum_path.sh
- Restore the PL/Container configuration that you saved in a previous step:
$ plcontainer runtime-restore -f plcontainer202-backup.xml
- Restart Greenplum Database.
$ gpstop -ra
- You do not need to re-register the PL/Container extension in the databases in which you previously created the extension, but ensure that you register the PL/Container extension in each new database that will run PL/Container UDFs. For example, the following command registers PL/Container in a database named mytest:
$ psql -d mytest -c 'CREATE EXTENSION plcontainer;'
The command also creates PL/Container-specific functions and views.
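Because the restore step depends on the backup taken in step 1, a small guard can prevent running runtime-restore against a missing file. This is a sketch; the file name is the example used in the steps above, and the plcontainer call itself is left commented out.

```shell
#!/bin/sh
# restore_config FILE: proceed with runtime-restore only when FILE
# exists and is non-empty. Sketch: the plcontainer call is commented out.
restore_config() {
    if [ -s "$1" ]; then
        echo "restoring PL/Container configuration from $1"
        # plcontainer runtime-restore -f "$1"
    else
        echo "error: backup file $1 is missing or empty" >&2
        return 1
    fi
}
```

Usage: `restore_config plcontainer202-backup.xml` after the gppkg upgrade completes.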
Uninstall PL/Container
To uninstall PL/Container, remove the Docker containers and images, and then remove PL/Container support from Greenplum Database.
When you remove support for PL/Container, the plcontainer user-defined functions that you created in the database will no longer work.
Uninstall Docker Containers and Images
On the Greenplum Database hosts, uninstall the Docker containers and images that are no longer required.
The plcontainer image-list command lists the Docker images that are installed on the local Greenplum Database host.
The plcontainer image-delete command deletes a specified Docker image from all Greenplum Database hosts.
- The command docker ps -a lists all containers on a host. The command docker stop stops a container.
- The command docker images lists the images on a host.
- The command docker rmi removes images.
- The command docker rm removes containers.
Remove PL/Container Support for a Database
To remove support for PL/Container, drop the extension from the database. Use the psql utility with the DROP EXTENSION command (using -c) to remove PL/Container from the mytest database:
psql -d mytest -c 'DROP EXTENSION plcontainer CASCADE;'
The CASCADE keyword drops PL/Container-specific functions and views.
Remove PL/Container 3 Beta Shared Library
This step is required only if you have installed PL/Container 3 Beta. Before you remove the extension from your system with gppkg, remove the shared library configuration for the plc_coordinator process:
- Examine the shared_preload_libraries server configuration parameter setting:
$ gpconfig -s shared_preload_libraries
- If plc_coordinator is the only library listed, remove the configuration parameter setting:
$ gpconfig -r shared_preload_libraries
Removing a server configuration parameter comments out the setting in the postgresql.conf file.
- If there are multiple libraries listed, remove plc_coordinator from the list and re-set the configuration parameter. For example, if shared_preload_libraries is set to 'diskquota,plc_coordinator':
$ gpconfig -c shared_preload_libraries -v 'diskquota'
- Restart the Greenplum Database cluster:
$ gpstop -ra
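Editing the comma-separated shared_preload_libraries value by hand is easy to get wrong. The strip_lib function below is a hypothetical aid, not part of Greenplum, that removes one library name from such a list; its output could then be passed to gpconfig -c shared_preload_libraries -v.

```shell
#!/bin/sh
# strip_lib LIST NAME: print LIST (a comma-separated value such as
# shared_preload_libraries) with NAME removed. Hypothetical helper.
strip_lib() {
    printf '%s\n' "$1" | tr ',' '\n' | grep -vx "$2" | paste -sd, -
}

strip_lib 'diskquota,plc_coordinator' plc_coordinator   # prints: diskquota
```

For example: `gpconfig -c shared_preload_libraries -v "$(strip_lib 'diskquota,plc_coordinator' plc_coordinator)"`.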
- If a PL/Container Docker container exceeds the maximum allowed memory, it is terminated and an out of memory warning is displayed.
- PL/Container does not limit the Docker base device size (the size of the Docker container). In some cases, the Docker daemon controls the base device size. For example, if the Docker storage driver is devicemapper, the Docker daemon --storage-opt option flag dm.basesize controls the base device size. The default base device size for devicemapper is 10GB. The Docker command docker info displays Docker system information including the storage driver. The base device size is displayed in Docker 1.12 and later. For information about Docker storage drivers, see the Docker documentation on daemon storage drivers.
When setting the Docker base device size, the size must be set on all Greenplum Database hosts.
Occasionally, when PL/Container is running in a high concurrency environment, the Docker daemon hangs with log entries that indicate a memory shortage. This can happen even when the system seems to have adequate free memory.
The issue seems to be triggered by the aggressive virtual memory requirements of the Go language (golang) runtime that is used by PL/Container, combined with the Greenplum Database Linux server kernel parameter setting for overcommit_memory. The parameter is set to 2, which does not allow memory overcommit.
A workaround that might help is to increase the amount of swap space and increase the Linux server kernel parameter overcommit_ratio. If the issue still occurs after the changes, there might be an actual memory shortage. Check the free memory on the system and add more RAM if needed. You can also decrease the cluster load.
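Before applying the workaround, you can inspect the current overcommit settings. The sketch below reads them from /proc (Linux-specific; it prints "unavailable" elsewhere). Raising the ratio would then be done as root with, for example, sysctl -w vm.overcommit_ratio=<value> (the value depends on your system).

```shell
#!/bin/sh
# Print the kernel overcommit settings relevant to the PL/Container
# memory issue described above. Linux-specific; reads /proc/sys/vm.
show_overcommit() {
    for name in overcommit_memory overcommit_ratio; do
        f="/proc/sys/vm/$name"
        if [ -r "$f" ]; then
            echo "$name = $(cat "$f")"
        else
            echo "$name = unavailable"
        fi
    done
}

show_overcommit
```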
Docker References
Docker home page: https://www.docker.com/
Docker command line interface: https://docs.docker.com/engine/reference/commandline/cli/
Dockerfile reference: https://docs.docker.com/engine/reference/builder/
For CentOS, see Docker site installation instructions for CentOS.
For a list of Docker commands, see the Docker engine Run Reference.
Installing Docker on Linux systems: https://docs.docker.com/engine/installation/linux/centos/
Control and configure Docker with systemd: https://docs.docker.com/engine/admin/systemd/