Hello. In this lesson, we will talk about setting up profiles on the Apigee platform. In the previous lesson, you saw how to bootstrap the hosts in your Apigee planet. The bootstrapping process configures access to your Apigee software repository and installs the basic utilities you will need to continue the installation. This lesson provides an overview of the next step, the profile setup process. Note that the profiles we will be setting up are strictly for Apigee Edge components; we will set up developer portal profiles in a later lesson. The profile setup process configures the services each host needs in order to fulfill one or more roles in the cluster. Those services are grouped into profiles, which direct the actions of the setup program. In addition to a profile name, the setup program requires a text response file, which is populated with values that can vary based on the profile being applied. We will discuss profiles and response files in detail in a moment, but first, let's pause to look at the installation topology we will use for the rest of the course.

This diagram shows which services we will be installing on each host. If any of the services shown here are unfamiliar to you, pause this lesson while you review the Apigee system architecture detailed in the last module, Fundamentals. Our planet will consist of six hosts. Recall that open source components are shown in green, and Apigee proprietary components are shown in blue. You can see that the first three hosts have ZooKeeper and Cassandra, which we refer to as the data stores. The first host also has OpenLDAP, the management server, and the enterprise UI, which together comprise our management stack. On hosts two and three, you will find routers and message processors, which together operate as the API gateway for processing API requests. Hosts four and five contain the analytics backend, which consists of Qpid and Qpid Server along with PostgreSQL and Postgres Server. The PostgreSQL instance on host four operates as the master database, while the instance on host five is the standby. Finally, host six will have the developer portal and its database.

Now, let's map those services to the profiles we will feed into the setup program, along with the order in which they will be applied. The ordering described here applies to any planet, no matter how many hosts or services are involved. The first step is to build all data storage services; we will perform that step on hosts one through three using the DS profile. The second step is to build all management stack services; we will apply the MS profile to host one to configure those components. Next, we will install gateway services. Since the routers and message processors are combined on hosts two and three, we can use the RMP profile to install both components at the same time. The fourth step is to install analytics services. As with the routers and message processors, we have a special profile named SAX, or standalone analytics, that installs all analytics components at the same time. If we had separate Qpid and PostgreSQL hosts, we would use the QS and PS profiles, respectively. Finally, we will install developer services using the PDB and DP profiles. The PDB profile installs PostgreSQL without an accompanying Postgres Server instance, and the DP profile installs the developer portal front end. This PostgreSQL instance is separate from the analytics PostgreSQL master and standby; it will be used exclusively to store data for the developer portal.
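To make that ordering concrete, here is a rough sketch of the commands this sequence implies, one invocation per host. This sketch is not part of the lesson: the hostnames host1 through host6 and the response file paths are placeholders, the lowercase profile names reflect how profiles are typically passed on the command line, and the exact location of setup.sh on your hosts may differ. The developer portal steps on host six are shown only for completeness; we cover them in a later lesson.

    # Step 1: data stores (ZooKeeper and Cassandra) on hosts one through three
    host1$ setup.sh -f /tmp/apigee/response.txt -p ds
    host2$ setup.sh -f /tmp/apigee/response.txt -p ds
    host3$ setup.sh -f /tmp/apigee/response.txt -p ds

    # Step 2: management stack (OpenLDAP, management server, enterprise UI) on host one
    host1$ setup.sh -f /tmp/apigee/response.txt -p ms

    # Step 3: gateway (router plus message processor) on hosts two and three
    host2$ setup.sh -f /tmp/apigee/response.txt -p rmp
    host3$ setup.sh -f /tmp/apigee/response.txt -p rmp

    # Step 4: standalone analytics (Qpid and PostgreSQL) on hosts four and five
    host4$ setup.sh -f /tmp/apigee/response.txt -p sax
    host5$ setup.sh -f /tmp/apigee/response.txt -p sax

    # Step 5: developer portal database and front end on host six (later lesson)
    host6$ setup.sh -f /tmp/apigee/portal-response.txt -p pdb
    host6$ setup.sh -f /tmp/apigee/portal-response.txt -p dp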
This table shows the common profiles used in a typical Apigee planet installation. While we won't use them in the demo, here are a few other profiles you might find useful when installing your own cluster. The R and MP profiles allow you to install routers and message processors independently in situations where they reside on separate hosts; in particular, deployments that place routers in a DMZ usually use these profiles. The SA profile places all gateway and management services components on a single host. Combined with the SAX profile I mentioned earlier, this is good for a demo or development deployment onto two hosts. The AIO, or all-in-one, profile stacks all components on a single host. This can be useful for a proof of concept or to deploy a fully functional Apigee planet on a developer machine.

The two inputs to the setup program are the profile name and a response file. Now that we know which profiles we will apply to each host, let's look at the response file we will use. A response file is simply a flat text file containing a number of Bash variables that are read by the setup program. We will use separate response files for several different tasks during installation: setting up Apigee Edge hosts, organization and environment onboarding, setting up developer portal hosts, and performing platform validation. For now, let's focus on the response file used to set up Edge hosts. Except in certain rare cases, you can use the same response file across all Edge hosts in each region, and only alter the settings on a per-region basis. The most common reason to have different response files within a region is if you have multiple management servers with different LDAP replication settings in the same region. For our demo, we will use a single response file, as shown here.

By convention, a block of IP addresses or fully qualified domain names is placed at the top. Remember that the response file is simply a file containing Bash variables, so typical variable substitution rules apply. HOSTIP should always contain the IP address at which the host is reachable by the rest of the cluster; the easiest way to set this is with the hostname -i command, as long as that command returns a single address. MSIP sets the address of the local management server in this region. ADMIN_EMAIL and APIGEE_ADMINPW set the username and password of the initial sysadmin user for the planet. LICENSE_FILE is the path to the license file that you were provided; it must exist on the local file system of each management server at installation time. In our demo, this file will be located at /tmp/apigee/license.txt. The USE_LDAP_REMOTE_HOST variable should almost always be set as shown; if you want to separate the Apigee internal OpenLDAP instance from the rest of the management stack, you can set it to y. LDAP_TYPE can be set to 1 for non-replicated LDAP, or 2 for replicated LDAP. If you set it to 2, you will also need to set a couple of other LDAP variables describing the settings for the replication peer, as described in the install guide. APIGEE_LDAPPW sets the OpenLDAP root DN password. MP_POD should be left as gateway unless you are customizing your gateway pod configuration. REGION is always prefixed with dc-, followed by an integer value starting at one and counting up for each subsequent region. ZK_HOSTS contains a full list of all ZooKeeper hosts in the planet. For planets with more than one region, you may wish to use special syntax to designate observer nodes.
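Before we walk through the remaining variables, here is a rough sketch of what the complete Edge response file for this single-region demo might look like. It is only an illustration: the IP addresses, passwords, and SMTP values are placeholders, and the SMTP variable names follow the pattern shown in the install guide, which you should check for the exact set your version expects. The variables from ZK_CLIENT_HOSTS onward are explained right after the sketch. Host six, the developer portal, uses its own response file and does not appear here.

    # Hypothetical Edge response file for the six-host demo topology.
    # All addresses and credentials below are placeholder values.
    IP1=192.168.0.1    # host one: data store + management stack
    IP2=192.168.0.2    # host two: data store + router/message processor
    IP3=192.168.0.3    # host three: data store + router/message processor
    IP4=192.168.0.4    # host four: analytics (PostgreSQL master)
    IP5=192.168.0.5    # host five: analytics (PostgreSQL standby)

    HOSTIP=$(hostname -i)
    MSIP=$IP1
    ADMIN_EMAIL=opdk@example.com
    APIGEE_ADMINPW=ChangeMe123
    LICENSE_FILE=/tmp/apigee/license.txt
    USE_LDAP_REMOTE_HOST=n
    LDAP_TYPE=1
    APIGEE_LDAPPW=ChangeMeToo
    MP_POD=gateway
    REGION=dc-1

    ZK_HOSTS="$IP1 $IP2 $IP3"
    ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
    CASS_HOSTS="$IP1 $IP2 $IP3"

    PG_MASTER=$IP4
    PG_STANDBY=$IP5
    PG_PWD=ChangeMeAsWell

    SKIP_SMTP=n
    SMTPHOST=smtp.example.com
    SMTPPORT=25
    SMTPUSER=smtp@example.com
    SMTPPASSWORD=smtppwd
    SMTPSSL=n
    SMTPMAILFROM="Apigee <noreply@example.com>"

    BIND_ON_ALL_INTERFACES=y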
Check the install guide for full details on the observer syntax. ZK_CLIENT_HOSTS is a list of the ZooKeeper hosts in this region, excluding any hosts in other regions; no special observer syntax should be included here. CASS_HOSTS describes all Cassandra hosts in the planet. An important detail to understand is that all Cassandra hosts in the current region need to be listed first, followed by Cassandra hosts in other regions, so this variable will differ based on the region you are installing. PG_MASTER and PG_STANDBY define the master and standby hosts for the analytics databases, and PG_PWD sets the password for the analytics database. SKIP_SMTP is a flag that instructs the installer to configure SMTP when set to n. The remaining SMTP settings configure the system to talk to your mail relay, if one is available. BIND_ON_ALL_INTERFACES controls whether routers and message processors bind on all interfaces, or just the interface to which the HOSTIP address is assigned; this should normally be set as shown.

There are other variables you can set to control the behavior of the setup program. If you are deploying an Apigee planet with multiple regions, you will likely want to include settings for LDAP replication, and you will need to alter the contents of the ZK_HOSTS, ZK_CLIENT_HOSTS, and CASS_HOSTS variables. For full details and examples of response file settings, follow the links shown here.

We have our list of profiles and we have a valid response file. The next step is to run the setup program on each host, passing in the response file and profile appropriate to that host's role. The syntax of the setup command is setup.sh -f, followed by the path to your response file, then -p and the profile you wish to apply. In the next module, we will see this process in action. For more information on this topic, refer to our documentation. If you have any questions, please post them on our community. Thanks for watching.