k8s-unicorn/cross-cloud

Cross-cloud Continuous Integration

Why Cross-cloud CI?

Our CI Working Group has been tasked with demonstrating best practices for integrating, testing, and deploying projects within the CNCF ecosystem across multiple cloud providers.

Help ensure that CNCF projects deploy and run successfully on each supported cloud provider.

What is Cross-cloud?

A project to continually validate the interoperability of each CNCF project, for every commit on stable and HEAD, for all supported cloud providers with the results published to the Cross-cloud public dashboard. The Cross-cloud project is composed of the following components:

  • Cross-project CI - Project app and e2e test container builder / Project to Cross-cloud CI integration point
    • Builds and registers containerized apps, as well as their related e2e tests, for deployment. Triggers the Cross-cloud CI pipeline.
  • Cross-cloud CI - Multi-cloud container deployer / multi-cloud project test runner
    • Triggers the creation of Kubernetes clusters on cloud providers, deploys containerized apps, and runs upstream project tests, supplying results to the Cross-cloud dashboard.
  • Multi-cloud provisioner - Cloud end-point provisioner for Kubernetes
    • Supplies conformance-validated Kubernetes end-points for each cloud provider, with cloud-specific features enabled.
  • Cross-cloud CI Dashboard - Interoperability status view
    • Provides a high-level view of the interoperability status of CNCF projects for each supported cloud provider.

How to Use Cross-cloud

You need a working Docker environment.

Quick start for AWS

Pre-reqs: an IAM user with the following permissions:

  • AmazonEC2FullAccess
  • AmazonS3FullAccess
  • AmazonRoute53DomainsFullAccess
  • AmazonRoute53FullAccess
  • IAMFullAccess
  • IAMUserChangePassword

AWS credentials

export AWS_ACCESS_KEY_ID="YOUR_AWS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_KEY"
export AWS_DEFAULT_REGION="YOUR_AWS_DEFAULT_REGION" # e.g. ap-southeast-2
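
Optionally, if you have the AWS CLI installed, you can confirm the credentials are valid before provisioning. This check is not part of Cross-cloud itself; it simply asks AWS who the exported keys belong to.

```shell
# Verify the exported AWS credentials by asking STS for the caller identity.
# A successful response prints the account ID, user ID, and ARN.
aws sts get-caller-identity
```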

Run the following to provision a Kubernetes cluster on AWS:

docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=aws \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0

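Once the deploy completes, cluster state is written to the mounted data directory (/tmp/data above). Assuming the provisioner writes a kubeconfig there (the exact file name and path may vary by version; check the directory contents after the run), you can point kubectl at it:

```shell
# Hypothetical kubeconfig location inside the mounted data directory;
# inspect /tmp/data after the deploy for the actual file name.
export KUBECONFIG=/tmp/data/kubeconfig
kubectl get nodes
```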
Quick start for GCE

Pre-reqs: a project created on Google Cloud (e.g. test-cncf-cross-cloud) and a Google Cloud JSON credentials file for authentication, downloaded directly from the Google Developers Console:

  1. Log into the Google Developers Console and select a project.
  2. With the API Manager view selected, click "Credentials" on the left, then "Create credentials," and finally "Service account key."
  3. Select "Compute Engine default service account" in the "Service account" dropdown, and select "JSON" as the key type.
  4. Click "Create" to download your credentials.
  5. Rename the file to credentials-gce.json and move it to your home directory (~/credentials-gce.json).

Google Project ID

  1. Log into the Google Developers Console; you will land on the Google API library page.
  2. Click the "Select a project" drop-down in the upper left.
  3. From the project list that appears, copy the Project ID for the desired project, e.g. test-cncf-cross-cloud.
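
Alternatively, if the gcloud CLI is installed and authenticated (via gcloud auth login), project IDs can be listed from the command line:

```shell
# List the project IDs visible to your account (requires the gcloud CLI).
gcloud projects list --format="value(projectId)"
```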

Run the following to provision a Kubernetes cluster on GCE:

export GOOGLE_CREDENTIALS=$(cat ~/credentials-gce.json)
docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=gce \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e GOOGLE_REGION=us-central1 \
  -e GOOGLE_PROJECT=test-cncf-cross-cloud \
  -e GOOGLE_CREDENTIALS="${GOOGLE_CREDENTIALS}" \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0

Quick start for OpenStack

You will need a full set of credentials for an OpenStack cloud, including the authentication endpoint.

Run the following to provision an OpenStack cluster:

docker run \
  -v $(pwd)/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=openstack \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e TF_VAR_os_auth_url=$OS_AUTH_URL \
  -e TF_VAR_os_region_name=$OS_REGION_NAME \
  -e TF_VAR_os_user_domain_name=$OS_USER_DOMAIN_NAME \
  -e TF_VAR_os_username=$OS_USERNAME \
  -e TF_VAR_os_project_name=$OS_PROJECT_NAME \
  -e TF_VAR_os_password=$OS_PASSWORD \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0
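
The $OS_* variables referenced above follow the standard OpenStack RC conventions; they are typically set by sourcing the openrc file downloadable from your cloud's dashboard (the file name below is only an example):

```shell
# Source the OpenStack RC file to export OS_AUTH_URL, OS_USERNAME, etc.
# (downloaded from the API Access section of the OpenStack dashboard).
source ~/my-project-openrc.sh   # example file name; yours will differ
env | grep '^OS_'               # confirm the variables are set
```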

General usage and configuration

Minimum required configuration to use Cross-cloud to deploy a Kubernetes cluster on Cloud X.

docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=<aws|gke|gce|openstack|packet> \
  -e COMMAND=<deploy|destroy> \
  -e BACKEND=<file|s3> \
  <CLOUD_SPECIFIC_OPTIONS> \
  <KUBERNETES_CLUSTER_OPTIONS> \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0
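
For example, to tear down the AWS cluster created in the quick start above, re-run the same command with COMMAND=destroy. The mounted data directory must be the one holding the Terraform state from the original deploy:

```shell
docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=aws \
  -e COMMAND=destroy \
  -e BACKEND=file \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0
```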

Common Options

  • -e CLOUD=<aws|gke|gce|openstack|packet> # Choose the cloud provider, then add the matching cloud-specific options below.
  • -e COMMAND=<deploy|destroy> # Create or tear down the cluster.
  • -e BACKEND=<file|s3> # file stores the Terraform state file on local disk; s3 stores it in an AWS S3 bucket.

Cloud Specific Options

AWS:

  • -e AWS_ACCESS_KEY_ID=secret
  • -e AWS_SECRET_ACCESS_KEY=secret
  • -e AWS_DEFAULT_REGION=ap-southeast-2

Packet:

  • -e PACKET_AUTH_TOKEN=secret
  • -e TF_VAR_packet_project_id=secret
  • -e DNSIMPLE_TOKEN=secret
  • -e DNSIMPLE_ACCOUNT=secret

GCE/GKE:

  • -e GOOGLE_CREDENTIALS=secret
  • -e GOOGLE_REGION=us-central1
  • -e GOOGLE_PROJECT=test-163823

OpenStack:

  • -e TF_VAR_os_auth_url=$OS_AUTH_URL
  • -e TF_VAR_os_region_name=$OS_REGION_NAME
  • -e TF_VAR_os_user_domain_name=$OS_USER_DOMAIN_NAME
  • -e TF_VAR_os_username=$OS_USERNAME
  • -e TF_VAR_os_project_name=$OS_PROJECT_NAME
  • -e TF_VAR_os_password=$OS_PASSWORD

Kubernetes Cluster Options

Custom Configuration options for the Kubernetes Cluster:

  • -e TF_VAR_pod_cidr=10.2.0.0/16 # Set the Kubernetes cluster pod CIDR
  • -e TF_VAR_service_cidr=10.0.0.0/24 # Set the Kubernetes cluster service CIDR
  • -e TF_VAR_worker_node_count=3 # Set the number of worker nodes to deploy in the cluster
  • -e TF_VAR_master_node_count=3 # Set the number of master nodes to deploy in the cluster
  • -e TF_VAR_dns_service_ip=10.0.0.10 # Set the Kubernetes DNS service IP (must lie within the service CIDR)
  • -e TF_VAR_k8s_service_ip=10.0.0.1 # Set the Kubernetes API service IP (must lie within the service CIDR)
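
For instance, the AWS quick start above could be extended with a larger worker pool and non-default CIDRs. The values here are illustrative; note that dns_service_ip and k8s_service_ip must fall inside service_cidr:

```shell
docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=aws \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -e TF_VAR_worker_node_count=5 \
  -e TF_VAR_pod_cidr=10.2.0.0/16 \
  -e TF_VAR_service_cidr=10.3.0.0/24 \
  -e TF_VAR_dns_service_ip=10.3.0.10 \
  -e TF_VAR_k8s_service_ip=10.3.0.1 \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0
```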

Additional Documentation

  • FAQ - Frequently Asked Questions

CI Status Dashboard Views

Current Phase: In Design/Planning

[Dashboard screenshot: web overview (cncf-dashboard_web_overview_v3-2default-b)]

[Dashboard screenshot: deployment view (cncf-dashboard_web_deployment-view_v3-2-default)]

Meetings / Demos

Upcoming

  • December 26th, 2017 - CI-WG Status Update on 4th Tuesday at 8am Pacific: Meeting canceled due to the holidays
  • January 3rd, 2018 - Cross Cloud project demo with Camille Fournier
  • January 9th, 2018 - CI-WG Status Update on 2nd Tuesday at 8am Pacific
  • January 23rd, 2018 - CI-WG Status Update on 4th Tuesday at 8am Pacific
  • January, 2018 - Intro call with Packet+Arm team, TBD

Past
