This topic describes the Greenplum Database 6 platform and operating system software requirements.
Greenplum 6 runs on the following operating system platforms:
- Red Hat Enterprise Linux 64-bit 7.x (See the following Note.)
- Red Hat Enterprise Linux 64-bit 6.x
- CentOS 64-bit 7.x
- CentOS 64-bit 6.x
- Ubuntu 18.04 LTS
- Oracle Linux 64-bit 7, using the Red Hat Compatible Kernel (RHCK)
If you use Red Hat Enterprise Linux 6 and resource group performance is acceptable for your use case, upgrade your kernel to version 2.6.32-696 or higher to benefit from other fixes to the cgroups implementation. On RHEL 7.x and CentOS 7.x systems, upgrading to version 7.3 or later resolves the issue.
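The kernel check implied by the note above can be sketched in shell. The minimum version string comes from the note; everything else (variable names, messages) is illustrative, and the comparison relies on GNU `sort -V` version ordering:

```shell
# Sketch: verify the running kernel meets the 2.6.32-696 minimum
# recommended for the cgroups fixes on RHEL/CentOS 6.
min_kernel="2.6.32-696"
running=$(uname -r)

# sort -V orders version strings numerically; if the minimum sorts
# first (or the two are equal), the running kernel is new enough.
oldest=$(printf '%s\n' "$min_kernel" "$running" | sort -V | head -n1)
if [ "$oldest" = "$min_kernel" ]; then
    echo "kernel $running meets minimum $min_kernel"
else
    echo "kernel $running is older than $min_kernel; upgrade recommended"
fi
```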
Greenplum Database server supports TLS version 1.2 on RHEL/CentOS systems, and TLS version 1.3 on Ubuntu systems.
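Server-side TLS is enabled through the standard PostgreSQL-style parameters that Greenplum inherits. The following `postgresql.conf` excerpt is a minimal sketch; the certificate and key file names are placeholders:

```
# postgresql.conf excerpt (sketch) -- enable server TLS.
# Certificate and key file names below are placeholders.
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
```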
Greenplum Database 6 requires the following operating system packages:
- libevent (or libevent2 on RHEL/CentOS 6)
- openssl-libs (RHEL 7/CentOS 7)
- sed (used by gpinitsystem)
Greenplum Database 6 uses Python 2.7.12, which is included with the product installation (and not installed as a package dependency).
- Open JDK 8 or Open JDK 11, available from AdoptOpenJDK
- Oracle JDK 8 or Oracle JDK 11
Hardware and Network
The following table lists minimum recommended specifications for hardware servers intended to support Greenplum Database on Linux systems. All host servers in your Greenplum Database system must have the same hardware and software configuration.
|Requirement|Description|
|---|---|
|Minimum CPU|Any x86_64 compatible CPU|
|Minimum Memory|16 GB RAM per server|
|Disk Space Requirements| |
|Network Requirements|10 Gigabit Ethernet within the array; NIC bonding is recommended when multiple interfaces are present|
Pivotal Greenplum can use either the IPv4 or IPv6 protocol.
You should run Greenplum Database on an XFS file system.
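The XFS recommendation above can be captured as a mount entry. The device path and mount point below are placeholders, and the option list reflects flags commonly suggested for Greenplum data directories (note that `nobarrier` is rejected by recent kernels, which enable write barriers unconditionally):

```
# /etc/fstab entry (sketch) -- device path and mount point are placeholders
/dev/sdb1  /data  xfs  rw,nodev,noatime,nobarrier,inode64  0 0
```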
Greenplum Database can run on network or shared storage if the shared storage is presented as a block device to the servers running Greenplum Database and the XFS file system is mounted on the block device. Network file systems are not recommended. When using network or shared storage, Greenplum Database mirroring must be used in the same way as with local storage, and no modifications should be made to the mirroring scheme or the recovery scheme of the segments.
Other features of the shared storage, such as de-duplication and replication, can be used with Greenplum Database as long as they do not interfere with its expected operation.
Greenplum Database can be deployed to virtualized systems only if the storage is presented as block devices and the XFS file system is mounted for the storage of the segment directories.
Greenplum Database can run on Amazon Web Services (AWS) servers using either Amazon instance store (Amazon uses the volume names ephemeral[0-20]) or Amazon Elastic Block Store (Amazon EBS) storage. If you use Amazon EBS storage, the storage should be a RAID of Amazon EBS volumes mounted with the XFS file system.
Greenplum Database provides access to HDFS with the Greenplum Platform Extension Framework (PXF). PXF v5.15.0 is integrated with Greenplum Database 6, and provides access to Hadoop, object store, and SQL external data stores. Refer to Accessing External Data with PXF in the Greenplum Database Administrator Guide for PXF configuration and usage information.
PXF supports Cloudera, Hortonworks Data Platform, MapR, and generic Apache Hadoop distributions. PXF bundles all of the JAR files on which it depends, including the following Hadoop libraries:
|PXF Version|Hadoop Version|Hive Server Version|HBase Server Version|
|---|---|---|---|
|5.15.0, 5.14.0, 5.13.0, 5.12.0, 5.11.1, 5.10.1|2.x, 3.1+|1.x, 2.x, 3.1+|1.3.2|