Istio Integration Project Calico on AWS—AWS Roadmap

Stories by Burak Tahtacıoğlu on Medium

The first is kOps. kOps is a good default for many users and use cases, as it gives you access to all of Calico’s flexible and powerful networking features. However, there are other options that may work better for your environment, for example kubeadm, Kubespray, and K3S. For each option, we will highlight some of the key features or differentiators from the other options. There are also kind, minikube, and microk8s, all of which are excellent tools for creating clusters for learning, initial demos, or proof-of-concept work.

You can consider using kOps to be a bit like using kubectl, but for managing clusters at a high level instead of for managing a single cluster. In this sense, it is a little like eksctl, but although kOps can deploy on AWS, it does not deploy using EKS. kOps builds production-grade, highly available clusters. It automates the provisioning of these clusters in many cloud environments, including AWS, which is officially supported. kOps fully supports all of Calico’s flexible and powerful networking features. It also provisions the necessary cloud infrastructure to support the cluster, for example, provisioning DNS and CloudFormation templates. It is built on a model that supports automatic idempotency. Idempotency means that, when run, the kOps tool will try to get a cluster into a specific pre-defined state.

kubeadm is a tool with different goals. It can be considered a building block tool for making clusters. kubeadm’s goal is to create a minimum viable Kubernetes cluster that conforms to best practices.
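As a very rough sketch of the kubeadm workflow (the flags and follow-up steps depend heavily on your environment; the pod CIDR shown here is simply Calico’s default):

# On the control-plane node:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# Then install a CNI such as Calico, and join each worker with the command printed by init, e.g.:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>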

Achieving a fully-featured cluster with kubeadm is significantly more complex than with kOps. However, it is very flexible and can work across a much broader range of environments than kOps. If your goal is to run only in AWS, kOps is likely a simpler path to getting a cluster. If you want to use kubeadm to build a complete, fully-featured cluster, you will likely need to use kubeadm as a building block in another ecosystem or as part of another installer tool.

Kubespray is easier to use than kubeadm, and more flexible than kOps. It is still more complex to use than kOps, however. It installs Kubernetes using Ansible, so if you already use Ansible it might be a good fit for your use case. Kubespray is designed to build highly available, fully customizable clusters, and a single Ansible playbook can build a cluster.

K3S is slightly different from the other options we have covered. It is not a tool for installing a Kubernetes cluster; instead, it is a lightweight distribution of Kubernetes in a single binary.

At the time of writing, the binary is less than 100 megabytes in size. It comes with a quick-start install script, and because the installer is “opinionated”, it is easy to install K3S. The installer is opinionated in the sense that a lot of choices are made for you on which technologies to use for cluster components like ingress and network policy. The focus of the project is to deliver a fully compatible implementation that does not change any core Kubernetes functionality. Due to its lightweight nature, it can run on low-specification machines, and it installs quickly. It also has minimal OS dependencies. However, it is only suitable for production in certain environments, and a default installation does not scale in the way that a full Kubernetes cluster does. For example, the default datastore is SQLite, which is perfectly suitable for small setups but cannot scale up in the way that etcd can. If you like the sound of K3S, you should carefully read up on and understand the caveats compared to a full Kubernetes cluster.
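For reference, the K3S quick-start really is a single command (a sketch; review the script before piping it into a shell anywhere outside a lab):

curl -sfL https://get.k3s.io | sh -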

kOps Overview

kOps builds production-grade, highly available clusters. It automates the provisioning of these clusters in many cloud environments, including AWS, and the kOps team officially supports using kOps with AWS. kOps fully supports all of Calico’s flexible and powerful networking features. It also provisions the necessary cloud infrastructure to support the cluster, for example, provisioning DNS and CloudFormation templates. To install the kOps CLI tool, first install kubectl and the AWS CLI. Then, install kOps using brew if you are on macOS. If you’re on Linux or using the Windows Subsystem for Linux, you can install it from GitHub.
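For example, a sketch based on the kOps documentation (check the docs for the current release and your platform):

# macOS
brew update && brew install kops

# Linux / WSL: download the latest release binary from GitHub
curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops
sudo mv kops /usr/local/bin/kops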

Welcome - kOps - Kubernetes Operations
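kOps also needs a dedicated IAM user with permissions to manage EC2, Route53, S3, IAM, and VPC resources. A sketch based on the kOps getting-started documentation (the group and user names here are just examples):

aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops

# Run "aws configure" with the newly created keys, then export them so kOps can use them:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)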

Here we can see the user is created and bound to an IAM group with the appropriate permissions. Finally, the keys are exported.

kOps needs to use DNS when building a cluster. For a test cluster, Gossip-based DNS can be used. Gossip-based clusters use a peer-to-peer network instead of externally hosted DNS for propagating the Kubernetes API address, so an externally hosted DNS service is not needed. Otherwise, we need to prepare somewhere to build the required DNS records. There are four scenarios covered in the kOps documentation, and you should choose the one that most closely matches your AWS situation. The four scenarios are:

- a domain purchased/hosted via AWS
- a subdomain under a domain purchased or hosted via AWS
- setting up Route53 for a domain purchased with another registrar
- a subdomain for clusters in Route53, leaving the domain at another registrar

kOps also needs to store the state and representation of the cluster.

This will become the source of truth for the cluster’s configuration. kOps uses AWS’s S3 service to store this data. S3 is Amazon’s object storage; you can consider it to be a bit like Google Drive or Dropbox. S3 stores objects in buckets, and you need to create a dedicated S3 bucket for kOps. You should also turn on versioning to allow rollback. Here we can see a bucket being created in the eu-west-1 AWS region, with versioning enabled on the bucket. Now we’re ready to go ahead and create a cluster. As you can see here, first we export two variables. The first variable is called NAME and it contains the cluster name.

Note that for a gossip-based cluster, this has to end with k8s.local. The second is called KOPS_STATE_STORE, and it contains the S3 URI. After that, it’s simply a case of running kops create cluster followed by the AWS zone names and cluster name.

Production setup - kOps - Kubernetes Operations

Creating an S3 Bucket

Deciding on an S3 Bucket Name—S3 (Simple Storage Service) is Amazon’s AWS cloud-based object storage offering, accessible through a web interface. S3 bucket names are globally unique, so you can’t use kops.calico.ninja; you will need to choose your own bucket name.

Deciding on a Cluster Name—You can stick with the default cluster name we use, or replace the “kopscalico” part if you prefer. However, ensure that k8s.local is still appended to the end, as this is not an Internet-facing cluster using DNS; communication between the nodes uses the Gossip protocol instead. If your cluster name does not end with k8s.local, it won’t use Gossip and you’ll have trouble with the deployment. Using the Gossip protocol for lab purposes is highly recommended.

Gossip DNS - kOps - Kubernetes Operations

We must define our S3 bucket and cluster names in environment variables that kOps can read (KOPS_STATE_STORE) and that we can use later (CLUSTER_NAME). Please keep in mind the s3:// protocol prefix, as this is required:

export KOPS_STATE_STORE=s3://kops.calico.ninja
export CLUSTER_NAME=kopscalico.k8s.local
echo "export KOPS_STATE_STORE=${KOPS_STATE_STORE}" >> ccol2awsexports.sh
echo "export CLUSTER_NAME=${CLUSTER_NAME}" >> ccol2awsexports.sh

In CloudShell as usual, use the AWS S3 command to create the S3 bucket with your own name:

aws s3 mb ${KOPS_STATE_STORE}
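As mentioned above, you should also enable versioning on the state bucket to allow rollback. A sketch (note that put-bucket-versioning takes the bucket name without the s3:// prefix):

aws s3api put-bucket-versioning --bucket ${KOPS_STATE_STORE#s3://} --versioning-configuration Status=Enabled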

Deploying a Cluster with kOps

Run the following command from your CloudShell to move any existing .kube folder to a backup location. A new .kube folder will be automatically created by kOps in a moment.

mv ~/.kube ~/.kube.epochtime.$(date +%s).backup

Grab the name of your current region:

aws ec2 describe-availability-zones | grep RegionName | head -n 1

Then, set a REGION variable appropriately, like this, editing the value if you’re using a different region:

export REGION=eu-west-1
echo "export REGION=${REGION}" >> ccol2awsexports.sh

Next, grab a list of the available zones in your current region:

aws ec2 describe-availability-zones | grep ZoneName

We already have our environment variable set with our S3 bucket, so now we can create our cluster configuration in that bucket by substituting the names of any of the above zones into the following command, or keeping eu-west-1a and eu-west-1b if you’re happy with them:

kops create cluster --zones eu-west-1a,eu-west-1b --networking calico --name ${CLUSTER_NAME}

We have not yet actually created any Kubernetes resources, just a configuration. Now, to finalize our staged changes and deploy the defined resources, we can update the cluster with the --yes flag:

kops update cluster --name ${CLUSTER_NAME} --yes --admin

After a couple of minutes, we can view the nodes that have been deployed (wait until this command shows 3 nodes before moving on):

kubectl get nodes -A

Nice. We can see from the STATUS column that each node is Ready and able to accept pods. Finally, let’s take a look at the pods already running in our cluster:

kubectl get pods -A

For completeness, you can also view the cluster configuration that kOps stored in the S3 bucket via the CLI like this:

aws s3 ls ${KOPS_STATE_STORE}/${CLUSTER_NAME}/

Now that we’ve had a look at the configuration, we can also use AWS CLI commands to see the Kubernetes nodes that have been created. Notice that we’re viewing these as EC2 instances:

aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

GitHub - GoogleCloudPlatform/microservices-demo: Sample cloud-native application with 10 microservices showcasing Kubernetes, Istio, gRPC and OpenCensus.

Deploying Online Boutique

Run the following command to deploy v0.2.2 of Online Boutique straight from the manifests in the microservices-demo repository:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/release/v0.2.2/release/kubernetes-manifests.yaml

Attempting to Deploy Istio Manifests

Before proceeding to the next steps, make sure all pods are running by using the following command:

kubectl get pods -A

Next we will attempt to deploy the Istio manifests for Online Boutique, again, straight from the manifests in the microservices-demo repository.

However, this will fail because we have not yet installed Istio on our cluster, so the cluster doesn’t recognize the specified resource types.

Note that this has not done any harm to our cluster:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/release/v0.2.2/release/istio-manifests.yaml

To create an ALB, we first need to grab the VPC ID of the VPC in which the kOps cluster was created:

export VPC_ID=`aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME" --query Vpcs[].VpcId --output text`
echo "export VPC_ID=${VPC_ID}" >> ccol2awsexports.sh
echo $VPC_ID

Now, we need to create a clusterrole, clusterrolebinding, and service account for the ALB Ingress Controller to run as:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml

Now, create the AWS IAM policy to grant the appropriate IAM privileges for the ALB:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json
aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://iam-policy.json

From the above output, note down the policy ARN and export it to a variable for use later. The Amazon Resource Name, or ARN, is the unique identifier for this IAM policy:

export POLICY_ARN=arn:aws:iam::012345678912:policy/ALBIngressControllerIAMPolicy
echo "export POLICY_ARN=${POLICY_ARN}" >> ccol2awsexports.sh
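If you prefer not to copy the ARN by hand, you can instead look it up with a query like this (a sketch, assuming the policy name used above) before appending it to ccol2awsexports.sh:

export POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='ALBIngressControllerIAMPolicy'].Arn" --output text)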

Now, set NODE_ROLE appropriately, as below.

export NODE_ROLE=nodes.$CLUSTER_NAME
echo "export NODE_ROLE=${NODE_ROLE}" >> ccol2awsexports.sh

Attach the policy you just created to the node role, using the ARN:

aws iam attach-role-policy --region=$REGION --role-name=$NODE_ROLE --policy-arn=$POLICY_ARN

Download the file alb-ingress-controller.yaml using wget. This is the manifest that describes how to deploy the AWS ALB Ingress Controller:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/alb-ingress-controller.yaml

We need to edit the YAML before we can deploy it. Edit alb-ingress-controller.yaml. If you’re familiar with the text editor vim, it’s already installed, so go ahead and use it. If you’re not familiar with vim, the text editor nano is easier to use. You can install it like this:

sudo yum -y install nano

You need to edit three lines: the ones specifying the cluster-name, aws-vpc-id, and aws-region.

You can start nano to edit the file with:

nano alb-ingress-controller.yaml

Then edit the file and press Ctrl-X when done. If you are happy with your edits, press “y” then “Enter” to save; otherwise, press “n” and start the edits over again. When you’re done, it should look like this, but with your three values substituted in:

Application Load Balancer | Elastic Load Balancing | Amazon Web Services

# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            # Limit the namespace where this ALB Ingress Controller deployment will
            # resolve ingress resources. If left commented, all namespaces are used.
            # - --watch-namespace=your-k8s-namespace
            # Setting the ingress-class flag below ensures that only ingress resources with the
            # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may
            # choose any class you'd like for this controller to respect.
            - --ingress-class=alb
            # REQUIRED
            # Name of your cluster. Used when naming resources created
            # by the ALB Ingress Controller, providing distinction between
            # clusters.
            - --cluster-name=kopscalico.k8s.local
            # AWS VPC ID this ingress controller will use to create AWS resources.
            # If unspecified, it will be discovered from ec2metadata.
            - --aws-vpc-id=vpc-0d42700c4a8ddefd3
            # AWS region this ingress controller will operate in.
            # If unspecified, it will be discovered from ec2metadata.
            # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region
            - --aws-region=eu-west-1
            # Enables logging on all outbound requests sent to the AWS API.
            # If logging is desired, set to true.
            # - --aws-api-debug
            # Maximum number of times to retry the aws calls.
            # defaults to 10.
            # - --aws-max-retries=10
          # env:
            # AWS key id for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            #- name: AWS_ACCESS_KEY_ID
            #  value: KEYVALUE
            # AWS key secret for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            #- name: AWS_SECRET_ACCESS_KEY
            #  value: SECRETVALUE
          # Repository location of the ALB Ingress Controller.
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
      serviceAccountName: alb-ingress-controller

Deploying an Application Load Balancer

Now, we can apply the YAML you just edited. If you made any mistakes, you will get an error. Once you have something that looks like the example output, you’ve probably got it right and you’re ready to move on:

kubectl apply -f alb-ingress-controller.yaml

Now we should see the ALB ingress controller pod created and with STATUS “Running”:

kubectl get pods -n=kube-system

Now, we need to label the two subnets that we want to deploy our ALB resources in. Grab the subnet IDs associated with the VPC like this:

aws ec2 describe-subnets --filters "Name=vpc-id,Values=${VPC_ID}" --query Subnets[].SubnetId --output table

Then, export variables SUBNET_ID1 and SUBNET_ID2 using the IDs from the above output:

export SUBNET_ID1=subnet-09f8f338be898c948
export SUBNET_ID2=subnet-04bb5f09859c664d0
echo "export SUBNET_ID1=${SUBNET_ID1}" >> ccol2awsexports.sh
echo "export SUBNET_ID2=${SUBNET_ID2}" >> ccol2awsexports.sh

Now we need to tag the subnet resources. Use these commands. They look incomplete but are correct and can be copied and pasted:

aws ec2 create-tags --resources $SUBNET_ID1 --tags Key=kubernetes.io/cluster/$CLUSTER_NAME,Value=shared
aws ec2 create-tags --resources $SUBNET_ID2 --tags Key=kubernetes.io/cluster/$CLUSTER_NAME,Value=shared
aws ec2 create-tags --resources $SUBNET_ID1 --tags Key=kubernetes.io/role/elb,Value=
aws ec2 create-tags --resources $SUBNET_ID2 --tags Key=kubernetes.io/role/elb,Value=

Finally, we can create an ingress to the Online Boutique service. Copy this whole block in; the heredoc pipes the YAML straight into kubectl:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: frontend
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: Exact
            backend:
              service:
                name: "frontend-external"
                port:
                  number: 80
EOF

You can now grab the ingress URL. Refresh the following command a few times until it appears. It should end with “elb.amazonaws.com”:

kubectl get ingress

It was noted during testing that occasionally the URL never appears. If after a few minutes there is no URL shown in the output, you can delete the ingress like this, and then recreate it using the previous command:

kubectl delete ingress frontend-ingress

Deleting an Application Load Balancer

Remove the clusterrole, clusterrolebinding and serviceaccount:

kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml

Detach and delete the IAM policy:

aws iam detach-role-policy --role-name=$NODE_ROLE --policy-arn=$POLICY_ARN
aws iam delete-policy --policy-arn=$POLICY_ARN

Delete the ingress resource:

kubectl delete ingress frontend-ingress

Remove the subnet tags:

aws ec2 delete-tags --resources $SUBNET_ID1 --tags Key=kubernetes.io/cluster/$CLUSTER_NAME,Value=shared
aws ec2 delete-tags --resources $SUBNET_ID2 --tags Key=kubernetes.io/cluster/$CLUSTER_NAME,Value=shared
aws ec2 delete-tags --resources $SUBNET_ID1 --tags Key=kubernetes.io/role/elb,Value=
aws ec2 delete-tags --resources $SUBNET_ID2 --tags Key=kubernetes.io/role/elb,Value=

Now examine the load balancer that was created:

aws elbv2 describe-load-balancers

Note down the “LoadBalancerArn” and store it in a variable like this:

export LOADBAL_ARN=arn:aws:elasticloadbalancing:eu-west-1:012345678912:loadbalancer/app/e84cfdb8-default-frontendi-3e1f/e1c6c4f9969e49ca
echo "export LOADBAL_ARN=${LOADBAL_ARN}" >> ccol2awsexports.sh

Note down the security group ID that the load balancer was deployed in, from the “describe-load-balancers” output above:

export LB_SG=sg-05034d4a228e2ce56
echo "export LB_SG=${LB_SG}" >> ccol2awsexports.sh

Now get the OwnerID of the VPC resource:

aws ec2 describe-vpcs --filters "Name=vpc-id,Values=${VPC_ID}" --query Vpcs[].OwnerId --output text

Store it in a variable like this:

export OWNER_ID=012345678912
echo "export OWNER_ID=${OWNER_ID}" >> ccol2awsexports.sh

Now get the security group ID associated with the nodes:

aws ec2 describe-security-groups --filters "Name=vpc-id,Values=$VPC_ID" "Name=group-name,Values=nodes.$CLUSTER_NAME" --query SecurityGroups[].GroupId  --output text

Store it in a variable like this:

export NODE_SG=sg-0204e8e6de1912c14
echo "export NODE_SG=${NODE_SG}" >> ccol2awsexports.sh

Now, delete the ingress rule added by the ALB in the node security group:

aws ec2 revoke-security-group-ingress --group-id=$NODE_SG --source-group=$LB_SG --group-owner=$OWNER_ID --protocol=tcp --port=0-65535

Now delete the load balancer like this:

aws elbv2 delete-load-balancer --load-balancer-arn=${LOADBAL_ARN}

Finally, clean up the security group that the load balancer was deployed in:

aws ec2 delete-security-group --group-id=$LB_SG

If you get a message about a “dependent object”, just wait a moment and run the command again. In fact, any step in this section can be repeated if you find that you are having trouble deleting anything.

Istio

Introduction to Calico’s Istio Integration

Pod traffic controls

Lets you restrict ingress traffic inside and outside pods and mitigate common threats to Istio-enabled apps.

Supports security goals

Enables adoption of a zero-trust network model for security, including traffic encryption, multiple enforcement points, and multiple identity criteria for authentication.

Familiar policy language

Kubernetes network policies and Calico network policies work as is; users do not need to learn another network policy model to adopt Istio.

Configuring the FlexVolume Driver for Dikastes

Dikastes is a component of Calico that enforces network policy for the Istio service mesh. It runs on the cluster as a sidecar alongside Istio’s proxy, Envoy. In this way, Calico enforces network policy for workloads both in the Linux kernel (using iptables, at L3-L4) and at L3-L7.

Configuring Dikastes

The Dikastes container running in each pod needs to speak to Felix, the per-node agent that manages routes, ACLs, and anything else required on the host to provide the desired connectivity for the endpoints on that host.

Red Hat Ecosystem Catalog

First, we have to patch the configuration that Felix and Dikastes use to synchronize policy:

calicoctl patch FelixConfiguration default --patch \
'{"spec": {"policySyncPathPrefix": "/var/run/nodeagent"}}'

Deploying Istio

Now that the FlexVolume driver has been configured for our cluster, we can deploy Istio. First, we download Istio 1.7.4 using curl and store the tree of downloaded files in a new directory under our current path:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.7.4 sh -

Next, we can change directories into the new istio directory we just created on our CloudShell:

cd istio-1.7.4

Now that we’re in the Istio installation folder, we can install Istio in our cluster:

./bin/istioctl install --set values.global.controlPlaneSecurityEnabled=true

This process can take a while. When it concludes, you should see the following output:

Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
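As an optional sanity check, confirm that the Istio control plane and ingress gateway pods are running before continuing:

kubectl get pods -n istio-system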

Updating the Istio Sidecar Injector

The sidecar injector automatically modifies pods as they are created to make them work with Istio. This step modifies the injector configuration to add Dikastes (a Calico component) as a sidecar container. We download some YAML and apply it as a patch to the istio-sidecar-injector ConfigMap to enable injection of Dikastes alongside Envoy:

curl https://docs.projectcalico.org/manifests/alp/istio-inject-configmap-1.7.yaml -o ~/istio-inject-configmap.yaml
kubectl patch configmap -n istio-system istio-sidecar-injector --patch "$(cat ~/istio-inject-configmap.yaml)"

Adding Calico Authorization Services to the Service Mesh

Next, we apply the following manifest to configure Istio to query Calico for application layer policy authorization decisions:

kubectl apply -f https://docs.projectcalico.org/manifests/alp/istio-app-layer-policy-v1.7.yaml

Adding Namespace Label

You can control enforcement of application layer policy on a per-namespace basis using a Kubernetes label. To enable Istio and application layer policy in the default namespace, add the label like this:

kubectl label namespace default istio-injection=enabled
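If you want to double-check that the label was applied, list the labels on the namespace:

kubectl get namespace default --show-labels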

Deploying Istio Manifests

Now that Istio is deployed, we can again attempt to install the Istio manifests from the Online Boutique microservices-demo repository. This time, it should work fine:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/release/v0.2.2/release/istio-manifests.yaml

Testing the Istio Ingress

The following command will give us the external name of the Istio ingress gateway that has been created:

kubectl get service -n=istio-system istio-ingressgateway

Open a web browser and browse to the full address shown ending with elb.amazonaws.com (using HTTP, since this is only a demo).

The page should render correctly. Notice in particular that you can click around the entire site, including the shopping cart, which has the URL suffix “/cart”.

Blocking the Shopping Cart with Application Layer Policy

First, run this command and observe that there is not any Calico GlobalNetworkPolicy on this cluster yet:

calicoctl get globalnetworkpolicy -o yaml

Next, let’s apply some Calico GlobalNetworkPolicy to the cluster that demonstrates blocking some content with Application Layer Policy.

This policy has two clauses. The first clause allows any HTTP access to the frontend for the root URL “/” and any URL prefixed with “/static”. The second clause implicitly blocks access to the remainder of the site, including the shopping cart. Note that the second clause does not actually say “Deny”; the deny is implied by the lack of an “Allow”. Of course, in a real deployment, you would spend the time to make your policy a little more subtle!
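As a rough sketch of what such an application layer policy looks like (illustrative only; the selector and rule details in the downloaded manifest may differ), Calico’s HTTP rules support exact and prefix path matches:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: frontend-alp-example
spec:
  selector: app == "frontend"
  ingress:
    - action: Allow
      http:
        methods: ["GET"]
        paths:
          - exact: "/"
          - prefix: "/static"
  egress:
    - action: Allow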

curl https://raw.githubusercontent.com/tigera/ccol2aws/main/boutique-alp-demo.yaml -o ~/boutique-alp-demo.yaml; calicoctl apply -f ~/boutique-alp-demo.yaml

You can now examine the policy:

calicoctl get globalnetworkpolicy -o yaml

If you’ve jumped ahead to test the site already, you’ll notice that our policy is not yet working! That’s because there’s one more thing to do. We started the pods before configuring Istio, so the Istio sidecar injector did not do its job.

We can confirm this by running the following command again. Notice that the frontend pod has only one running container, which confirms the sidecar containers are not running:

kubectl get pod

Restart your frontend pod like this. Substitute in the name of your frontend pod, as it will be different from this one:

kubectl delete pod frontend-5c4745dfdb-gcx6l
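Alternatively (assuming the Deployment is named frontend, as it is in the Online Boutique manifests), you can restart all of its pods without looking up the pod name:

kubectl rollout restart deployment frontend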

Now if you check the pods again, you should see that the READY state of the new frontend pod is 3/3 (showing that the sidecar containers have started):

kubectl get pod

If you now open a browser again, you should be able to load the front page of the Online Boutique demo, but if you try to click on the cart, you should be denied access.

Cleanup

As we deployed our kOps cluster using the kOps tool, we will also use the kOps tool to delete it:

kops delete cluster --name ${CLUSTER_NAME} --yes

Then, we can delete the S3 bucket:

aws s3 rb ${KOPS_STATE_STORE}

If either of these stages gives you any trouble, it likely means that something wasn’t deleted properly in the “Deploying an Application Load Balancer” section. If so, go back over that section and make sure you deleted everything.

See you in the next article…

