iot-hub-website
Amplify Setup
- In order to use Amplify you will need to generate and provide an access token for it to be able to build the frontend. The required scopes are api, read_api & read_repository
- When deploying the stack to AWS, it can optionally be configured with a domain in Route53 via the domain params.
- When the stack deploys you will then need to trigger an Amplify job to build the web frontend, similar to what's done in the update-frontend job in the .gitlab-ci.yml file (see the sketch below)
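A minimal sketch of triggering that build programmatically, assuming the AWS SDK v3 Amplify client; the appId and branch are placeholders for your app's values (the update-frontend job achieves the same from CI):

```javascript
// Sketch: start an Amplify build, assuming @aws-sdk/client-amplify is installed.
// 'APP_ID' and 'main' are placeholders for your Amplify app & branch.
const { AmplifyClient, StartJobCommand } = require('@aws-sdk/client-amplify');

const amplify = new AmplifyClient({});

async function buildFrontend(appId, branchName) {
  // A RELEASE job makes Amplify fetch & build the latest commit on the branch
  await amplify.send(new StartJobCommand({ appId, branchName, jobType: 'RELEASE' }));
}

buildFrontend('APP_ID', 'main');
```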
Arduino Setup
- Find your AWS Account & Region specific AWS IoT Broker Endpoint
  - This can be found here: https://console.aws.amazon.com/iot/home#/settings
- Create the AWS IoT certificate:
  aws iot create-keys-and-certificate --set-as-active
  The response will be:
{
"certificateArn": "arn:aws:iot:{Region}:{AccountId}:cert/2c0f72bf-b230-4c49-9e0e-cbf177926d96",
"certificateId": "2c0f72bf-b230-4c49-9e0e-cbf177926d96",
"certificatePem": "-----BEGIN CERTIFICATE-----\n{Certificate}\n-----END CERTIFICATE-----\n",
"keyPair": {
"PublicKey": "-----BEGIN PUBLIC KEY-----\n{Public Key Material}\n-----END PUBLIC KEY-----\n",
"PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\n{Private Key Material}\n-----END RSA PRIVATE KEY-----\n"
}
}
- Prepare the arduino_secrets.h file in arduino/iot-lightbulb/
  - Enter your WiFi name & password
  - Enter your Account & Region specific AWS IoT Broker Endpoint from Step 1
  - Enter a unique identifier for the device
  - Enter the complete Device Certificate & Device Private Key from Step 2
- Deploy the template.yaml including the certificateArn parameter from Step 2
- Upload the .ino file to Arduino using the Arduino IDE
- The board will now listen on the ${DEVICE_ID}/${INCOMING_TOPIC} topic for events targeted at this device (see the publish sketch below)
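A minimal sketch of publishing such an event from AWS, assuming the SDK v3 IoT data-plane client; the deviceId and topic segment are placeholders:

```javascript
// Sketch: publish to the topic the board listens on, assuming
// @aws-sdk/client-iot-data-plane. 'light-bulb' & 'incoming' are placeholders.
const { IoTDataPlaneClient, PublishCommand } = require('@aws-sdk/client-iot-data-plane');

const iot = new IoTDataPlaneClient({});

async function sendToDevice(deviceId, incomingTopic, state) {
  await iot.send(new PublishCommand({
    topic: `${deviceId}/${incomingTopic}`, // the topic the board subscribes to
    qos: 1,
    payload: JSON.stringify({ state }),    // e.g. { "state": "ON" }
  }));
}

sendToDevice('light-bulb', 'incoming', 'ON');
```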
Usage
- Create a Cognito user in the user pool
- Associate a device to the user in the devicesTable (sketch below)
{
"userId": "${Cognito user sub}",
"deviceId": "light-bulb",
"name": "demo light bulb thing",
"type": "LIGHT_BULB"
}
This association only allows logged-in users to interact with their own devices
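A minimal sketch of writing that association item, assuming the SDK v3 document client and that the deployed table is literally named devicesTable:

```javascript
// Sketch: associate a device with a Cognito user, assuming
// @aws-sdk/client-dynamodb & @aws-sdk/lib-dynamodb. The table name is an assumption.
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, PutCommand } = require('@aws-sdk/lib-dynamodb');

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function associateDevice(userSub) {
  await ddb.send(new PutCommand({
    TableName: 'devicesTable',
    Item: {
      userId: userSub,          // the Cognito user's sub claim
      deviceId: 'light-bulb',
      name: 'demo light bulb thing',
      type: 'LIGHT_BULB',
    },
  }));
}
```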
Devices before logging in
Login page
Devices after logging in
The On/Off switch for the device will send a message to the Arduino over MQTT to turn the LED On or Off.
Arduino setup
When the Arduino receives a message over MQTT it will either be told to turn the LED ON or OFF. The Arduino will then set the voltage of the relevant data pin to turn the LED On or Off.
AWS Websocket API
This template.yaml defines a Websocket API that uses dynamo to store active connectionIds and push updates downstream to connections
Setup:
- Deploy repository to AWS
- Get WebSocketAPIendpoint from the stack outputs - e.g. wss://ue22gqann8.execute-api.eu-west-2.amazonaws.com/$default
Usage
- Connect to WebSocketAPIendpoint
- Every 5 mins the connectionUpdaterFunction will run and push a message to all active connections.
- Send a configured action to the API, such as:
{
"action": "createuser"
}
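A minimal client sketch, assuming Node with the ws package installed:

```javascript
// Sketch: a client for the API (npm install ws). WS_URL is the
// WebSocketAPIendpoint stack output.
const WebSocket = require('ws');

const WS_URL = 'wss://ue22gqann8.execute-api.eu-west-2.amazonaws.com/$default';
const ws = new WebSocket(WS_URL);

ws.on('open', () => {
  // send a configured action once connected
  ws.send(JSON.stringify({ action: 'createuser' }));
});

// messages pushed by connectionUpdaterFunction arrive here every 5 mins
ws.on('message', (data) => console.log(data.toString()));
```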
EC2-gitlab-runner-fleet
Instance:
- ImageId: ami-084e8c05825742534 (eu-west-2)
- InstanceType: t2.micro
Runners:
- executor: docker
- image: gitlab-runner:latest
- privileged: true (docker-in-docker)
Setup:
- Get Gitlab runner token:
- create a group in your gitlab
- acquire a token for that group: https://gitlab.com/groups/YOUR_GROUP/-/runners
- Download the template.
- Go to AWS Cloudformation, create stack, upload template.
- enter a stack name, your token & a scalingGroupDesiredSize (1 will work).
- finish creating the stack.
- the ec2 instances & runners will be created.
- Cloudwatch logs /aws/ec2/STACK_NAME-terminal & /aws/ec2/STACK_NAME-gitlab will show the instances' internal logs
- after less than 5 mins the runners will be visible in https://gitlab.com/groups/YOUR_GROUP/-/runners, named after the instance they're running on.
Terminating an instance will result in the runner being unregistered before the instance is terminated
API-query-RDS
Setup:
- Deploy project to aws
- Copy the RDSDBSecretArn output by the stack
- Go to RDS & select Query on the database
- enter these credentials with the Secrets manager ARN as the RDSDBSecretArn
- create the Customers Table
CREATE TABLE Customers (
CustomerId int,
FirstName varchar(255)
)
- create a few example customers
INSERT INTO Customers (CustomerId, FirstName) VALUES (100, 'Name');
INSERT INTO Customers (CustomerId, FirstName) VALUES (101, 'Name1');
INSERT INTO Customers (CustomerId, FirstName) VALUES (102, 'Name2');
- make a GET request to the HttpApiUrl output by the stack. E.g. a GET request to https://ID.execute-api.region.amazonaws.com/CustomerId/100 will receive:
[
{
"CustomerId": 100,
"FirstName": "Name"
}
]
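A minimal request sketch, assuming Node 18+'s global fetch; the URL is a placeholder for the HttpApiUrl stack output:

```javascript
// Sketch: query a customer through the HTTP API.
const HTTP_API_URL = 'https://ID.execute-api.region.amazonaws.com';

fetch(`${HTTP_API_URL}/CustomerId/100`)
  .then((res) => res.json())
  .then((customers) => console.log(customers)); // [{ CustomerId: 100, FirstName: "Name" }]
```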
ECS-Github-Runners
setup
- Generate a Github Org token. An Admin will need to generate an access token allowing access to Webhooks & Self-Hosted Runners
- Deploy the stack using your cli of choice. You will need to specify:
- Your Organisation Name - githubOrgNameParam
- Your Access Token - githubOrgTokenParam
- When the stack finishes creating it will:
- Create a webhook on your organisation - The webhook id will be saved as an SSM Parameter
- Create the initial runner token to be used by the ecs runners. This token only lasts 60 minutes & will be rotated every 55 minutes (see the sketch below)
- The stack is now ready to receive webhook events. It will scale out an ECS container when it receives a webhook event for a queued workflow and scale it in once the job finishes
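The token rotation boils down to one GitHub REST call; a hedged sketch, assuming Node 18+ fetch and an org token with Self-Hosted Runner access (ORG and GITHUB_TOKEN are placeholders for the stack parameters):

```javascript
// Sketch: mint a fresh self-hosted runner registration token for an org.
const ORG = 'your-org';
const GITHUB_TOKEN = process.env.GITHUB_TOKEN;

async function rotateRunnerToken() {
  const res = await fetch(`https://api.github.com/orgs/${ORG}/actions/runners/registration-token`, {
    method: 'POST',
    headers: {
      Accept: 'application/vnd.github+json',
      Authorization: `Bearer ${GITHUB_TOKEN}`,
    },
  });
  const { token, expires_at } = await res.json(); // valid for ~60 minutes
  return { token, expires_at };
}
```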
ECS Container
The container used by the Github runners is amazonlinux:2023. It installs a few libraries and registers the github runner to your org. It can take up to a minute to install the necessary libraries. Using a custom docker image can minimize start-up times and provide all the libraries commonly needed for your workflows.
AVP-auth-api
This API is backed by a Lambda Authorizer that uses Amazon Verified Permissions (AVP) as a Cedar policy store and authorization evaluation engine. This can facilitate RBAC & ABAC logic.
Setup
- Deploy the Application with a given stack name & AVP namespace (used in policy logic)
- Create a user in the deployed Cognito user pool, add that user to the READ and/or WRITE group & log in to the appClient to retrieve a JWT token.
- Make a request to the API url from the stack outputs with the Authorization header as the JWT token.
- If the Cognito user is in the Cognito read group then they'll have access to the /read endpoint. If they're in the write group then they'll have access to the /write endpoint.
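A minimal request sketch, assuming Node 18+ fetch; the API url and JWT are placeholders for the stack output and the token from the Cognito appClient:

```javascript
// Sketch: call the protected endpoint with the JWT in the Authorization header.
const API_URL = 'https://ID.execute-api.region.amazonaws.com';
const JWT = '<token from the Cognito appClient>';

fetch(`${API_URL}/read`, {
  headers: { Authorization: JWT }, // evaluated by the AVP-backed Lambda Authorizer
}).then((res) => console.log(res.status)); // 200 for READ group members; denied otherwise
```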
AWS-Serverless-Multi-Region-API
This is a proof of concept global AWS API using the Serverless framework.
It includes a base api stack made up of a global dynamo table & its replicas, a KMS key and many SSM parameters to pass relevant Ids & Arns to the node stacks.
The node stacks create a GraphQL API with a HTTP Proxy fronted by a Latency Route53 record with Cloudwatch metric or Lambda health checks for regional failover. The node stacks are designed to be region agnostic and can be deployed into any region that already has a global table replica. The GraphQL API communicates with the replica in its own region to provide the lowest possible latency to access data across the globe.
The base api stack also includes a KMS key that's replicated in each node stack. The intention was for this to be used by the Lambda Authorizer for the region specific GraphQL API to authorize encrypted credentials stored in the global table as a 'global' authentication mechanism. This was preferable compared to AWS managed regional API keys (which would fail if requests were routed to a region the key wasn't created in) or Cognito authorization, which is inherently non-global
AWS-Asynchronous-api
Implementation of an asynchronous AWS API using SQS to buffer & batch requests with dynamo as a central store for the events, their status and their result. The processFileFunction artificially pauses for 2 - 8 seconds to simulate processing each message.
setup
Deploy the Stack - The API url will be output from the stack.
Make a POST request to the /StartQuery endpoint with any body (it doesn't validate the body) - You will receive a response body like:
{
"MessageId": "4b39d05e-1add-40fe-8c77-6255e5700522",
"Status": "QUEUED"
}
Make a GET request to the /QueryResults endpoint with the MessageId query parameter - If the message is still in the QUEUED status then you will receive:
{
"MessageId": "4b39d05e-1add-40fe-8c77-6255e5700522",
"Status": "QUEUED"
}
If the message is in the FINISHED status you will receive:
{
"ttl": "1689212154",
"Status": "FINISHED",
"MessageId": "4b39d05e-1add-40fe-8c77-6255e5700522"
}
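A minimal start-and-poll sketch of the whole flow, assuming Node 18+ fetch; the URL is a placeholder for the API url output from the stack:

```javascript
// Sketch: start a query then poll for its result.
const API_URL = 'https://ID.execute-api.region.amazonaws.com';

async function runQuery() {
  // the body is not validated by the API
  const start = await fetch(`${API_URL}/StartQuery`, { method: 'POST', body: '{}' });
  const { MessageId } = await start.json();

  // poll /QueryResults until the message leaves the QUEUED status
  let result;
  do {
    await new Promise((r) => setTimeout(r, 2000)); // wait between polls
    const res = await fetch(`${API_URL}/QueryResults?MessageId=${MessageId}`);
    result = await res.json();
  } while (result.Status === 'QUEUED');

  console.log(result); // e.g. { ttl, Status: "FINISHED", MessageId }
}

runQuery();
```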
ecs-cluster & ecs-cluster-image-repo are individual repositories / AWS stacks
ecs-cluster
ecs-cluster can be deployed on its own given the imageRepoName parameter is updated to point to an already existing image / ECR repo
Changing the image might need more resources to be assigned to the Task Definition
Setup
first, deploy ecs-cluster-image-repo to AWS then run the deployDockerImage Gitlab Job to publish the react-app as a docker image to an ECR repository
This job will also run the scripts/envs.sh bash script that prefixes all AWS stack outputs with NEXT_PUBLIC_, saves them all to a .env file and compiles it into the docker image for dynamic use of variables inside the image instead of hardcoding values.
The docker image also exposes port 8081 which the ecs-cluster directs web traffic to.
secondly, deploy ecs-cluster to AWS. The deployment will automatically create an ECS service with a running task. This task can be accessed via the load balancer DNS name that is exported as an output of the ecs-cluster stack, e.g. http://ecs-cluster-lb-12345678.eu-west-2.elb.amazonaws.com/
The ecs-cluster gitlab jobs can also be used to administrate ECS:
- restartAllTasks - gets the number of tasks running e.g. 3 then launches 3 more while shutting down the previous 3. This process will refresh the image in the task if the image has been updated.
- stopAllTasks - stops all tasks to save costs
- startOneTask - starts 1 task
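What those jobs amount to against the ECS API, as a hedged SDK v3 sketch; the cluster & service names are placeholders for what the stack creates:

```javascript
// Sketch: the ECS administration the gitlab jobs perform, assuming @aws-sdk/client-ecs.
const { ECSClient, UpdateServiceCommand } = require('@aws-sdk/client-ecs');

const ecs = new ECSClient({});

async function restartAllTasks(cluster, service) {
  // forceNewDeployment launches replacement tasks (pulling the latest image)
  // while the previous ones drain
  await ecs.send(new UpdateServiceCommand({ cluster, service, forceNewDeployment: true }));
}

async function setTaskCount(cluster, service, desiredCount) {
  // stopAllTasks is desiredCount 0; startOneTask is desiredCount 1
  await ecs.send(new UpdateServiceCommand({ cluster, service, desiredCount }));
}
```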
EC2 instances backed by EFS
The template in this repository defines an Application Load Balancer, backed by an Auto Scaling Group that launches public instances across 3 AZs that mount a shared Elastic File System
Instance:
- ImageId: ami-084e8c05825742534 (eu-west-2)
- InstanceType: t2.micro
- UserData:
- install & configure amazon-cloudwatch-agent
- install & start httpd
- mount instance to template defined EFS
Setup:
- deploy the template - specify stack parameters
- By default this will create a VPC with the default CIDR parameter. The Auto Scaling Group will launch 2 instances in that VPC behind the Load Balancer. Each instance will mount the EFS to the efsMountPoint directory & touch a file into that directory.
- After resource creation, each instance will be streaming its terminal logs to cloudwatch. Checking the efsMountPoint directory in any instance will show as many files as there are running instances.
A-Star Path Finding Algorithm
Implementation of A-Star Path Finding Algorithm
setup
- Run the code
- Left click = start square
- right click = finishing square
result
Green - Start point
Red - End point
Dark-Blue - options that have been evaluated
Light-Blue - options that are next to be evaluated
Purple - The shortest path to the end
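A compact sketch of the algorithm behind those colours: grid A* with a Manhattan heuristic, written here in JavaScript for illustration (the actual project's implementation may differ):

```javascript
// Sketch: A* on a grid where 0 = walkable and 1 = wall (an assumption).
function aStar(grid, start, goal) {
  const key = ([x, y]) => `${x},${y}`;
  const h = ([x, y]) => Math.abs(x - goal[0]) + Math.abs(y - goal[1]); // Manhattan heuristic
  const open = [{ pos: start, g: 0, f: h(start), parent: null }];      // light-blue frontier
  const closed = new Set();                                            // dark-blue evaluated set

  while (open.length > 0) {
    open.sort((a, b) => a.f - b.f);        // pick the node with the lowest f = g + h
    const current = open.shift();
    if (key(current.pos) === key(goal)) {  // reconstruct the purple shortest path
      const path = [];
      for (let n = current; n; n = n.parent) path.unshift(n.pos);
      return path;
    }
    closed.add(key(current.pos));
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const next = [current.pos[0] + dx, current.pos[1] + dy];
      const [x, y] = next;
      if (x < 0 || y < 0 || y >= grid.length || x >= grid[0].length) continue;
      if (grid[y][x] === 1 || closed.has(key(next))) continue;
      const g = current.g + 1;
      const existing = open.find((n) => key(n.pos) === key(next));
      if (existing && existing.g <= g) continue; // already reached more cheaply
      if (existing) open.splice(open.indexOf(existing), 1);
      open.push({ pos: next, g, f: g + h(next), parent: current });
    }
  }
  return null; // no path exists
}

const grid = [
  [0, 0, 0, 0],
  [1, 1, 0, 1],
  [0, 0, 0, 0],
];
console.log(aStar(grid, [0, 0], [0, 2])); // [[0,0],[1,0],[2,0],[2,1],[2,2],[1,2],[0,2]]
```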
The Lost Dungeon 3
Made using Java 16 - Version 16.0.1
Included Libraries: openjfx-16 & com.google.code.gson:gson:1.4
Example room:
Example of the boss room:
cloudwatch-winston-logger
Example
// inside a Lambda handler, where `event` & `context` are the handler arguments
const { Logger } = require('./utils/winstonLogger');
let logger = new Logger(context, { metadata0: 'xyz' }); // metadata0 is attached to every log line
let metadata1 = "abc";
let metadata2 = "def";
let metadata3 = "ghi";
logger.info(event);
logger.addMetadata({ metadata1 });            // subsequent lines include metadata1
logger.info(event);
logger.addMetadata({ metadata2 }, { metadata3 });
logger.warn(event);
logger.removeMetadata('metadata1');           // metadata1 is dropped from later lines
logger.http(event);
logger.removeMetadata('metadata2', 'metadata3');
logger.report(event);
Output
{"level":"info", "message":{"key":"value"}, "metadata0":"xyz"}
{"level":"info", "message":{"key":"value"}, "metadata0":"xyz", "metadata1":"abc"}
{"level":"warn", "message":{"key":"value"}, "metadata0":"xyz", "metadata1":"abc", "metadata2":"def", "metadata3":"ghi"}
{"level":"http", "message":{"key":"value"}, "metadata0":"xyz", "metadata2":"def", "metadata3":"ghi"}
{"level":"report", "message":{"key":"value"}, "metadata0":"xyz"}
Example
try {
await Promise.all(jobs);
} catch (err) {
logger.error(new Error(err));
}
Cloudwatch Output
{
"awsRequestId": "b04c613a-64e5-4fb6-b2c0-5085f971ded6",
"level": "error",
"message": "ReferenceError: jobs is not defined",
"stack": [
"Error: ReferenceError: jobs is not defined",
" at Runtime.handler (/var/task/src/index.js:34:18)",
" at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1089:29)"
]
}
s3-cloudfront-distribution
cloudfront distribution with an s3 origin behind a domain
setup
- deploy the template to cloudformation
- enter the parameters
- optionally fill out domainCertUSEast1, domainName, hostedZoneId & subDomainName for cloudformation to configure a CNAME & for route53 to create a recordSet.
- upload the contents of /s3 to the origin & it will be accessible from CloudFront
EC2-Gitlab-Instance
Instance:
- ImageId: ami-084e8c05825742534 (eu-west-2)
- InstanceType: t2.medium
Setup:
- deploy the template to cloudformation
- enter the parameters
- configuration with a domain:
  - including the hostedZoneId, domainName, subDomainName & domainCertArn parameters will:
    - Create a HTTPS loadbalancer targetGroup & listener with the domainCertArn on port 443
    - Create a dns record in the hostedZoneId for subDomainName.domainName e.g. gitlab.example.co.uk
    - Configure the gitlab instance for the domain subDomainName.domainName
  - not including the hostedZoneId, domainName, subDomainName & domainCertArn parameters will:
    - Omit the HTTPS loadbalancer targetGroup & listener
    - Not create any dns records
    - Configure the gitlab instance to be accessible from the loadbalancer e.g. {loadBalancerName}-1234567890.AWS::Region.elb.amazonaws.com
- Give time for the instance to create, it will be accessible from the dns record or the public ELB domain
Notes
Q: Why is a load balancer needed?
A: The Gitlab CE installation creates & signs its own HTTPS certificate, which some browsers warn about when trying to access the site. The load balancer listens on port 443 & injects your domain certificate when using HTTPS, resolving this issue.
Q: How do backups work?
A: The gitlab.rb file is configured to send the Gitlab backup, the gitlab.rb file & gitlab-secrets.json. A backup will occur every day at 00:00. A backup can also be performed by running the preform-backup SSM document.
The default username is root & the userData script sets the password to the gitlabRootPassword stack parameter, the default being Password123!
Self-Managed Gitlab CE
Once you finish setting up Gitlab CE you can log in, create groups & repos without issue. You can even clone them locally (set up ssh), add files then push them back to your Gitlab. Additionally, you can also register your own runners on a global or group level. These runners can then create resources in aws using a template.yaml & gitlab-ci.yaml.
AWS-Transfer-Server
Setup:
- deploy the template to cloudformation
- enter the parameters
- configuration with a domain:
  - including the hostedZoneId, domainName & domainCertArn parameters will:
    - Setup the Certificate on the server & enable FTPS as a protocol
    - Create a Route 53 Record for the domainName
  - including the hostedZoneId, subDomainName, domainName & domainCertArn parameters will:
    - Setup the Certificate on the server & enable FTPS as a protocol
    - Create a Route 53 Record for subDomainName.domainName
Usage
The server's url will be output by the stack
The server can now be connected to over FTP, SFTP & FTPS (if a domain is configured)
FileZilla:
The Identity provider function will be invoked with a payload that can be evaluated & integrated into your environment:
{
"username": "user",
"sourceIp": "10.23.0.6",
"protocol": "FTP",
"serverId": "s-0123456789ABCDEF",
"password": "password"
}
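A hedged sketch of such an identity provider handler, following AWS Transfer's documented Lambda contract (an IAM Role on success, an empty object to reject); the credential check, ROLE_ARN and home directory are placeholders:

```javascript
// Sketch: a custom identity provider for AWS Transfer. The event matches the
// payload above; the response grants or denies the session.
exports.handler = async (event) => {
  const { username, password } = event; // sourceIp, protocol & serverId also arrive

  // replace with your own credential / IP evaluation logic (placeholder check)
  const authenticated = username === 'user' && password === 'password';
  if (!authenticated) return {}; // empty response = access denied

  return {
    Role: process.env.ROLE_ARN,              // IAM role assumed for the session
    HomeDirectory: `/my-bucket/${username}`, // per-user S3 prefix (assumption)
  };
};
```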
AWS-Amplify-Weather-App
uses the openweathermap api to retrieve data
Setup
- deploy weather-app-2-frontend to gitlab, make note of its project_id
- create an access token that allows READ REPO access to the frontend - this will be used by Amplify
- get an api access token from openweathermap (used in the sketch below)
- deploy backend & setup with CICD variables FRONTEND_REPO_PROJECT_ID, AMPLIFY_TOKEN & WEATHER_API_TOKEN
- run the update-frontend CICD job on either repo to have Amplify get the most recent frontend commit on the specified branch & deploy it to the backend
- check the Amplify website
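A minimal sketch of the kind of call the app makes with that token, assuming openweathermap's current-weather endpoint and Node 18+ fetch; the city is a placeholder:

```javascript
// Sketch: fetch current weather. The token is the WEATHER_API_TOKEN CICD variable.
const token = process.env.WEATHER_API_TOKEN;

fetch(`https://api.openweathermap.org/data/2.5/weather?q=London&units=metric&appid=${token}`)
  .then((res) => res.json())
  .then((weather) => console.log(weather.main.temp)); // current temperature in °C
```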
Setup
deploy glue-bucket-crawler, wait for it to finish, then deploy test-api-v2
execute lambda test-api-v2-create-user with event:
{ "Username": "customer0", "Password": "Password123!", "Email": "a@b.com" }
go to the user in cognito, add them to the group test-api-v2-UserPoolClient-FileGroup
add file to S3 bucket athena-bucket-agb43j/customer0/file.json
Make a GET request to the /File endpoint with the header Authorization: Basic Y3VzdG9tZXIwOlBhc3N3b3JkMTIzIQ== (see the sketch below)
All elements of the json file will be returned.
supported query parameters: token, updated_at, limit & page
The file must contain an updated_at timestamp e.g. 2022-11-12T19:48:02.404Z to use the updated_at query parameter.
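A minimal request sketch, assuming Node 18+ fetch and a placeholder API url; the Basic credentials encode customer0:Password123! from the create-user step:

```javascript
// Sketch: request the file contents with Basic auth.
const API_URL = 'https://ID.execute-api.region.amazonaws.com';
const auth = Buffer.from('customer0:Password123!').toString('base64');

fetch(`${API_URL}/File?limit=10&page=1`, {
  headers: { Authorization: `Basic ${auth}` },
})
  .then((res) => res.json())
  .then((elements) => console.log(elements)); // elements of customer0/file.json
```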
Arduino-AWS-IoT-Core
Arduino
This uses the Arduino MKR WiFi 1010 to publish Temperature & Humidity sensor data to AWS Timestream through AWS IoT Core using MQTT
Setup
- Find your AWS Account & Region specific AWS IoT Broker Endpoint
  - This can be found here: https://console.aws.amazon.com/iot/home#/settings
- Create the AWS IoT certificate:
  aws iot create-keys-and-certificate --set-as-active
  The response will be:
{
"certificateArn": "arn:aws:iot:{Region}:{AccountId}:cert/2c0f72bf-b230-4c49-9e0e-cbf177926d96",
"certificateId": "2c0f72bf-b230-4c49-9e0e-cbf177926d96",
"certificatePem": "-----BEGIN CERTIFICATE-----\n{Certificate}\n-----END CERTIFICATE-----\n",
"keyPair": {
"PublicKey": "-----BEGIN PUBLIC KEY-----\n{Public Key Material}\n-----END PUBLIC KEY-----\n",
"PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\n{Private Key Material}\n-----END RSA PRIVATE KEY-----\n"
}
}
- Prepare the arduino_secrets.h file
  - Enter your WiFi name & password
  - Enter your Account & Region specific AWS IoT Broker Endpoint from Step 1
  - Enter a unique identifier for the device
  - Enter the complete Device Certificate & Device Private Key from Step 2
- Deploy the template.yaml including the certificateArn parameter from Step 2. The template will listen on a topic with the same name as the stack.
- Upload the .ino file to Arduino using the Arduino IDE
- The board will now read Temperature & Humidity data from the DHT22 sensor and publish it to Timestream through AWS IoT Core (a query sketch follows below)
Arduino Logs
Timestream Example
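A hedged sketch of reading the ingested data back, assuming the SDK v3 Timestream query client; the database & table names are placeholders for whatever the template creates:

```javascript
// Sketch: query recent readings, assuming @aws-sdk/client-timestream-query.
const { TimestreamQueryClient, QueryCommand } = require('@aws-sdk/client-timestream-query');

const tsq = new TimestreamQueryClient({});

async function latestReadings() {
  const res = await tsq.send(new QueryCommand({
    QueryString: 'SELECT * FROM "DATABASE"."TABLE" WHERE time > ago(1h) ORDER BY time DESC LIMIT 10',
  }));
  console.log(res.Rows); // one row per Temperature / Humidity measurement
}

latestReadings();
```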
Arduino-AWS-IoT-Core
Arduino
This uses the Arduino MKR WiFi 1010 to publish to AWS IoT over MQTT
Setup
- Find your AWS Account & Region specific AWS IoT Broker Endpoint
  - This can be found here: https://console.aws.amazon.com/iot/home#/settings
- Create the AWS IoT certificate:
  aws iot create-keys-and-certificate --set-as-active
  The response will be:
{
"certificateArn": "arn:aws:iot:{Region}:{AccountId}:cert/2c0f72bf-b230-4c49-9e0e-cbf177926d96",
"certificateId": "2c0f72bf-b230-4c49-9e0e-cbf177926d96",
"certificatePem": "-----BEGIN CERTIFICATE-----\n{Certificate}\n-----END CERTIFICATE-----\n",
"keyPair": {
"PublicKey": "-----BEGIN PUBLIC KEY-----\n{Public Key Material}\n-----END PUBLIC KEY-----\n",
"PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\n{Private Key Material}\n-----END RSA PRIVATE KEY-----\n"
}
}
- Prepare the arduino_secrets.h file
  - Enter your WiFi name & password
  - Enter your Account & Region specific AWS IoT Broker Endpoint from Step 1
  - Enter the complete Device Certificate & Device Private Key from Step 2
- Deploy the template.yaml including the certificateArn parameter from Step 2
- Upload the .ino file to Arduino using the Arduino IDE
- Published messages will now invoke a Lambda (sketched below)
- Arduino code
- Cloudwatch Logs
Arduino Logs
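A minimal sketch of the invoked function; with an AWS IoT rule the event is typically the published MQTT payload itself (the exact shape depends on the rule's SQL statement in template.yaml, so this is an assumption):

```javascript
// Sketch: a handler for the messages forwarded by the IoT rule.
exports.handler = async (event) => {
  console.log('message from Arduino:', JSON.stringify(event));
};
```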
wave-function-collapse
Initial testing:
The small boxes represent a blank tile
tests with more complicated tile sheets after adding dynamic rules
addition of asymmetric tiles
proof of concept dungeon layout
could be used for The-Lost-Dungeon-4?
Appsync GraphQl API
This template.yaml defines an AppSync API that uses dynamo resolvers to directly interface with a dynamo table
Setup:
- deploy the template - specify stack parameters
- setup without a domain
  - leave the domainCertArn, domainName, hostedZoneId & subDomainName blank. The stack outputs will be the API url & an API token
- setup with a domain
  - enter a value for domainCertArn, domainName & hostedZoneId. This will create an Appsync domain association and a route53 record
- setup with a domain and subdomain
  - enter a value for domainCertArn, domainName, hostedZoneId & subDomainName. This will create an Appsync domain association and a route53 record for subDomainName.domainName
Usage
creating a post
mutation addPost {
addPost(
author: "AUTHORNAME"
title: "Our first post!"
content: "This is our first post."
url: "https://aws.amazon.com/appsync/"
) {
id
author
title
content
url
}
}
This mutation will return:
{
"data": {
"addPost": {
"id": "8b909b4c-77c0-4aab-a44f-34e7fd7e04b7",
"author": "AUTHORNAME",
"title": "Our first post!",
"content": "This is our first post.",
"url": "https://aws.amazon.com/appsync/"
}
}
}
This id can be used with the getPost query
getting a post
query getPost {
getPost(id: "8b909b4c-77c0-4aab-a44f-34e7fd7e04b7") {
id
author
title
content
url
}
}
This query will return:
{
"data": {
"getPost": {
"id": "8b909b4c-77c0-4aab-a44f-34e7fd7e04b7",
"author": "AUTHORNAME",
"title": "Our first post!",
"content": "This is our first post.",
"url": "https://aws.amazon.com/appsync/"
}
}
}
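The same query can also be made over plain HTTP; a minimal sketch assuming Node 18+ fetch, with the API url & API token stack outputs passed via AppSync's x-api-key header (the URL placeholder assumes the output is the full GraphQL endpoint):

```javascript
// Sketch: call the AppSync API over HTTP with API-key auth.
const API_URL = 'https://EXAMPLE.appsync-api.eu-west-2.amazonaws.com/graphql';
const API_TOKEN = '<API token stack output>';

const query = `query getPost {
  getPost(id: "8b909b4c-77c0-4aab-a44f-34e7fd7e04b7") { id author title content url }
}`;

fetch(API_URL, {
  method: 'POST',
  headers: { 'x-api-key': API_TOKEN, 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.getPost));
```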