---

copyright:

lastupdated: "2022-01-24"

keywords: kubernetes, openshift, red hat, red hat openshift

subcollection: openshift

content-type: tutorial
services: openshift
account-plan:
completion-time: 45m

---
{{site.data.keyword.attribute-definition-list}}
# Creating an {{site.data.keyword.openshiftshort}} cluster
{: #openshift_tutorial}
{: toc-content-type="tutorial"}
{: toc-services="openshift"}
{: toc-completion-time="45m"}
Create a cluster with worker nodes that come installed with the {{site.data.keyword.openshiftshort}} container orchestration platform.
{: shortdesc}
With {{site.data.keyword.openshiftlong}}, you can create highly available clusters with virtual or bare metal worker nodes that come installed with the {{site.data.keyword.openshiftlong_notm}} Container Platform orchestration software. You get all the advantages of a managed offering for your cluster infrastructure environment, while using the {{site.data.keyword.openshiftshort}} tooling and catalog{: external} that runs on Red Hat Enterprise Linux for your app deployments.
{{site.data.keyword.openshiftshort}} worker nodes are available for paid accounts and standard clusters only. In this tutorial, you create a cluster that runs version 4.8. The operating system is Red Hat Enterprise Linux 7.
{: note}
## Objectives
{: #openshift_objectives}
In the tutorial lessons, you create a standard {{site.data.keyword.openshiftlong_notm}} cluster, open the {{site.data.keyword.openshiftshort}} console, access built-in {{site.data.keyword.openshiftshort}} components, deploy an app in an {{site.data.keyword.openshiftshort}} project, and expose the app on an {{site.data.keyword.openshiftshort}} route so that external users can access the service.
{: shortdesc}
## Audience
{: #openshift_audience}
This tutorial is for cluster administrators who want to learn how to create a {{site.data.keyword.openshiftlong_notm}} cluster for the first time by using the CLI.
{: shortdesc}
## Prerequisites
{: #openshift_prereqs}
Complete the following prerequisite steps to set up permissions and the command-line environment.
{: shortdesc}
Permissions: If you are the account owner, you already have the required permissions to create a cluster and can continue to the next step. Otherwise, ask the account owner to set up the API key and assign you the minimum user permissions in {{site.data.keyword.cloud_notm}} IAM.
Command-line tools: For quick access to your resources from the command line, try the {{site.data.keyword.cloud-shell_notm}}{: external}. Otherwise, set up your local command-line environment by completing the following steps.
- Install the {{site.data.keyword.cloud_notm}} CLI (`ibmcloud`), the {{site.data.keyword.containershort_notm}} plug-in (`ibmcloud oc`), and the {{site.data.keyword.registrylong_notm}} plug-in (`ibmcloud cr`).
- Install the {{site.data.keyword.openshiftshort}} (`oc`) and Kubernetes (`kubectl`) CLIs.
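Before you install anything, you can script a quick pre-flight check for the required CLIs. The following is a minimal sketch that only reports which tools are already on your `PATH`; the installer commands in the comments are the commonly documented ones, so verify them against the current {{site.data.keyword.cloud_notm}} CLI docs before you run them.

```sh
# Pre-flight sketch: report which required CLIs are already installed.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

for tool in ibmcloud oc kubectl; do
  check_tool "$tool"
done

# Typical install commands (verify against the current docs first):
#   curl -fsSL https://clis.cloud.ibm.com/install/linux | sh      # IBM Cloud CLI
#   ibmcloud plugin install container-service container-registry  # oc and cr plug-ins
```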
## Create an {{site.data.keyword.openshiftshort}} cluster
{: #openshift_create_cluster}
{: step}
Create a {{site.data.keyword.openshiftlong_notm}} cluster. To learn about the components that are set up when you create a cluster, see the Service architecture. {{site.data.keyword.openshiftshort}} is available for standard clusters only. You can learn more about the price of standard clusters in the frequently asked questions.
{: shortdesc}
1. Log in to the account and resource group where you want to create {{site.data.keyword.openshiftshort}} clusters. If you have a federated account, include the `--sso` flag.

    ```sh
    ibmcloud login [-g <resource_group>] [--sso]
    ```
    {: pre}
2. Create a cluster with a unique name. The following command creates a version 4.8 cluster in Washington, DC, with the minimum configuration of 2 worker nodes that have at least 4 cores and 16 GB memory so that the default {{site.data.keyword.openshiftshort}} components can deploy. If you have existing VLANs that you want to use, get the VLAN IDs by running `ibmcloud oc vlan ls --zone <zone>`. For more information, see Creating a standard classic cluster in the CLI.

    ```sh
    ibmcloud oc cluster create classic --name my-openshift --location wdc04 --version 4.8_openshift --flavor b3c.4x16.encrypted --workers 2 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID> --public-service-endpoint
    ```
    {: pre}
3. List your cluster details. Review the cluster State, check the Ingress Subdomain, and note the Master URL.

    Your cluster creation might take some time to complete. After the cluster state shows Normal, the cluster network and Ingress components take about 10 more minutes to deploy and update the cluster domain that you use for the {{site.data.keyword.openshiftshort}} web console and other routes. Before you continue, wait until the cluster is ready by checking that the Ingress Subdomain follows a pattern of `<cluster_name>.<globally_unique_account_HASH>-0001.<region>.containers.appdomain.cloud`.

    ```sh
    ibmcloud oc cluster get --cluster <cluster_name_or_ID>
    ```
    {: pre}
4. Download and add the `kubeconfig` configuration file for your cluster to your existing `kubeconfig` in `~/.kube/config` or the last file in the `KUBECONFIG` environment variable.

    ```sh
    ibmcloud oc cluster config --cluster <cluster_name_or_ID>
    ```
    {: pre}
5. In your browser, navigate to the address of your Master URL and append `/console`. For example, `https://c0.containers.cloud.ibm.com:23652/console`.

6. From the {{site.data.keyword.openshiftshort}} web console menu bar, click your profile IAM#user.name@email.com > Copy Login Command. Display and copy the `oc login` token command into your command line to authenticate with the CLI.

    Save your cluster master URL so that you can access the {{site.data.keyword.openshiftshort}} console later. In future sessions, you can skip the `cluster config` step and copy the login command from the console instead.
Verify that the
occommands run properly with your cluster by checking the version.oc version
{: pre}
Example output
Client Version: v4.8.0 Kubernetes Version: v1.22.4.2
{: screen}
If you can't perform operations that require Administrator permissions, such as listing all the worker nodes or pods in a cluster, download the TLS certificates and permission files for the cluster administrator by running the `ibmcloud oc cluster config --cluster <cluster_name_or_ID> --admin` command.
{: tip}
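Because the Ingress Subdomain is populated only after the cluster network components finish deploying, you might want to script the readiness check from step 3. The following sketch assumes only POSIX shell and `grep`; the polling loop in the comment additionally assumes the `--output json` flag of `ibmcloud oc cluster get`, an `ingressHostname` field in its output, and the `jq` tool, so treat it as illustrative rather than exact.

```sh
# Sketch: check whether an Ingress Subdomain value matches the documented
# pattern <cluster_name>.<account_HASH>-0001.<region>.containers.appdomain.cloud.
ingress_ready() {
  echo "$1" | grep -Eq '^[a-z0-9-]+\.[a-z0-9]+-[0-9]{4}\.[a-z0-9-]+\.containers\.appdomain\.cloud$'
}

# Illustrative polling loop (assumes --output json, .ingressHostname, and jq):
#   while ! ingress_ready "$(ibmcloud oc cluster get --cluster <cluster_name_or_ID> --output json | jq -r '.ingressHostname')"; do
#     sleep 60
#   done

ingress_ready "mycluster.1a2b3c4d5e6f-0001.us-east.containers.appdomain.cloud" && echo "Ingress is ready"
```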
## Navigate the {{site.data.keyword.openshiftshort}} console
{: #openshift_oc_console}
{: step}
{{site.data.keyword.openshiftlong_notm}} comes with built-in services that you can use to help operate your cluster, such as the {{site.data.keyword.openshiftshort}} console.
{: shortdesc}
### {{site.data.keyword.openshiftshort}} version 4 console
{: #openshift_console4_overview_tutorial}
1. From the {{site.data.keyword.openshiftshort}} clusters console{: external}, select your {{site.data.keyword.openshiftshort}} cluster, then click OpenShift web console.
2. To work with your cluster in the CLI, click your profile IAM#user.name@email.com > Copy Login Command. Display and copy the `oc login` token command into your command line to authenticate by using the CLI.
You can explore the following areas of the {{site.data.keyword.openshiftshort}} web console.
Administrator perspective
: The Administrator perspective is available from the perspective switcher in the side navigation menu. From the Administrator perspective, you can manage and set up the components that your team needs to run your apps, such as projects for your workloads, networking, and operators for integrating IBM, Red Hat, third-party, and custom services into the cluster. For more information, see Viewing cluster information{: external} in the {{site.data.keyword.openshiftshort}} documentation.

Developer perspective
: The Developer perspective is available from the perspective switcher in the side navigation menu. From the Developer perspective, you can add apps to your cluster in a variety of ways, such as from Git repositories, container images, drag-and-drop or uploaded YAML files, operator catalogs, and more. The Topology view presents a unique way to visualize the workloads that run in a project and navigate their components from sidebars that aggregate related resources, including pods, services, routes, and metadata. For more information, see Developer perspective{: external} in the {{site.data.keyword.openshiftshort}} documentation.
### {{site.data.keyword.openshiftshort}} version 3.11 console
{: #openshift_console311_overview_tutorial}
Service Catalog
: The Service catalog is available from the dropdown menu in the OpenShift Container Platform menu bar. Browse the catalog of built-in services that you can deploy on {{site.data.keyword.openshiftshort}}. For example, if you already have a Node.js app that is hosted on GitHub, you can click the Languages tab and deploy a JavaScript app. The My Projects pane provides a quick view of all the projects that you have access to, and clicking a project takes you to the Application Console. For more information, see the {{site.data.keyword.openshiftshort}} Web Console Walkthrough{: external} in the {{site.data.keyword.openshiftshort}} documentation.
Application Console
: The Application console is available from the dropdown menu in the OpenShift Container Platform menu bar. For each project that you have access to, you can manage your {{site.data.keyword.openshiftshort}} resources such as pods, services, routes, builds, images, or persistent volume claims. You can also view and analyze logs for these resources, or add services from the catalog to the project. For more information, see the {{site.data.keyword.openshiftshort}} Web Console Walkthrough{: external} in the {{site.data.keyword.openshiftshort}} documentation.

Cluster Console
: The Cluster console is available from the dropdown menu in the OpenShift Container Platform menu bar. As a cluster-wide administrator across all the projects in the cluster, you can manage projects, service accounts, RBAC roles, role bindings, and resource quotas. You can also see the status and events for resources within the cluster in a combined view. For more information, see the {{site.data.keyword.openshiftshort}} Web Console Walkthrough{: external} in the {{site.data.keyword.openshiftshort}} documentation.
## Deploy an app
{: #openshift_deploy_app}
{: step}
With {{site.data.keyword.openshiftlong_notm}}, you can create an app and expose your app service through an {{site.data.keyword.openshiftshort}} Ingress controller so that external users can access the app.
{: shortdesc}
If you took a break from the last lesson and started a new command line, make sure that you log back in to your cluster. Open your {{site.data.keyword.openshiftshort}} web console at `https://<master_URL>/console`. For example, `https://c0.containers.cloud.ibm.com:23652/console`. Then, from the menu bar, click your profile IAM#user.name@email.com > Copy Login Command. Display and copy the `oc login` token command into your command line to authenticate with the CLI.
{: tip}
1. Create a project for your Hello World app. A project is an {{site.data.keyword.openshiftshort}} version of a Kubernetes namespace with additional annotations.

    ```sh
    oc new-project hello-world
    ```
    {: pre}
2. Build the sample app from the source code{: external}. With the {{site.data.keyword.openshiftshort}} `new-app` command, you can refer to a directory in a remote repository that contains the Dockerfile and app code to build your image. The command builds the image, stores the image in the local Docker registry, and creates the app deployment configurations (`dc`) and services (`svc`). For more information about creating new apps, see the {{site.data.keyword.openshiftshort}} docs{: external}.

    ```sh
    oc new-app --name hello-world https://github.com/IBM/container-service-getting-started-wt --context-dir="Lab 1"
    ```
    {: pre}
3. Verify that the sample Hello World app components are created.
    1. List the hello-world services and note the service name. Your app listens for traffic on these internal cluster IP addresses unless you create a route for the service so that the Ingress controller can forward external traffic requests to the app.

        ```sh
        oc get svc -n hello-world
        ```
        {: pre}

        Example output

        ```
        NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
        hello-world   ClusterIP   172.21.xxx.xxx   <none>        8080/TCP   31m
        ```
        {: screen}

    2. List the pods. Pods with `build` in the name are jobs that completed as part of the new app build process. Make sure that the hello-world pod status is `Running`.

        ```sh
        oc get pods -n hello-world
        ```
        {: pre}

        Example output

        ```
        NAME                   READY   STATUS      RESTARTS   AGE
        hello-world-1-9cv7d    1/1     Running     0          30m
        hello-world-1-build    0/1     Completed   0          31m
        hello-world-1-deploy   0/1     Completed   0          31m
        ```
        {: screen}
4. Set up a route so that you can publicly access the hello-world service. By default, the hostname is in the format `<service_name>-<project>.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud`. If you want to customize the hostname, include the `--hostname=<hostname>` flag. Note: The hostname that is assigned to your route is different from the Ingress subdomain that is assigned by default to your cluster. Your route does not use the Ingress subdomain.
    1. Create a route for the hello-world service.

        ```sh
        oc create route edge --service=hello-world -n hello-world
        ```
        {: pre}

    2. Get the route hostname address from the HOST/PORT output.

        ```sh
        oc get route -n hello-world
        ```
        {: pre}

        Example output

        ```
        NAME          HOST/PORT                                                                                PATH   SERVICES      PORT       TERMINATION   WILDCARD
        hello-world   hello-world-hello-world.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud          hello-world   8080-tcp   edge/Allow    None
        ```
        {: screen}
5. Access your app. Be sure to append `https://` to your route hostname.

    ```sh
    curl https://hello-world-hello-world.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud
    ```
    {: pre}

    Example output

    ```
    Hello world from hello-world-9cv7d! Your app is up and running in a cluster!
    ```
    {: screen}
6. Optional: To clean up the resources that you created in this lesson, you can use the labels that are assigned to each app.
    1. List all the resources for each app in the `hello-world` project.

        ```sh
        oc get all -l app=hello-world -o name -n hello-world
        ```
        {: pre}

        Example output

        ```
        pod/hello-world-1-dh2ff
        replicationcontroller/hello-world-1
        service/hello-world
        deploymentconfig.apps.openshift.io/hello-world
        buildconfig.build.openshift.io/hello-world
        build.build.openshift.io/hello-world-1
        imagestream.image.openshift.io/hello-world
        imagestream.image.openshift.io/node
        route.route.openshift.io/hello-world
        ```
        {: screen}

    2. Delete all the resources that you created.

        ```sh
        oc delete all -l app=hello-world -n hello-world
        ```
        {: pre}
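To make the default route hostname format from this lesson concrete, the following sketch composes a hostname from the service name, project, and a placeholder cluster domain. The `compose_route_host` helper is hypothetical, for illustration only; it is not an `oc` feature.

```sh
# Sketch: compose the default route hostname
# <service_name>-<project>.<cluster_domain> (placeholder values, not a real domain).
compose_route_host() {
  service="$1"
  project="$2"
  cluster_domain="$3"
  echo "${service}-${project}.${cluster_domain}"
}

compose_route_host hello-world hello-world "<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud"
# → hello-world-hello-world.<cluster_name>-<random_ID>.<region>.containers.appdomain.cloud
```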
## What's next?
{: #openshift_next}
For more information about working with your apps, see the {{site.data.keyword.openshiftshort}} developer activities{: external} documentation.
Install two popular {{site.data.keyword.openshiftlong_notm}} add-ons: {{site.data.keyword.la_full_notm}} and {{site.data.keyword.mon_full_notm}}.
