Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.
Still Life
Deploying Docker Compose Application to AWS EC2
Many deployment platforms (Vercel, Heroku, Render) add a great amount of magic and convenience to the process of publishing a web app. But is it all that difficult to work without some of the tooling?
I wanted to find out. So this week I put on my DevOps hat and decided to get my hands dirty!
My aim was to take an app I had built, wrap it up along with a database into a docker container, and deploy to AWS. All without extra bells and whistles: no Fargate, no Elastic Beanstalk, no CI/CD integration just yet. Just a simple Linux server on EC2!
In this case, it's a Java Spring Boot app with a PostgreSQL db. Since it's all wrapped up in Docker Compose, though, this post will apply to any containerized app.
Here we go!
Local Setup
Write the Dockerfile
Assuming I already have my app built, we'll write the Dockerfile for it. I'm going to store it under src/main/docker for organization. We'll also keep it pretty simple for the application:
FROM openjdk:17-oracle
COPY . /app
ENTRYPOINT ["java", "-jar", "app/app.jar"]All that's happening here is I'm using the Java image for the version I'll build with. Then I'll copy the contents into the container. And lastly, I'll kick off the program with java -jar app/app.jar
Build the Executable
If you're not running Spring Boot, feel free to skip ahead! Here's how I'm setting up my executable:
To build my app, I'm going to run mvn clean package. This will generate a jar file in my target folder. From there, I'll simply move it over to the docker directory with the Linux command:
cp target/demo-0.0.1-SNAPSHOT.jar src/main/docker/app.jar
Write the Docker Compose Config
Next is the docker compose file. This is where I'm bringing in the PostgreSQL db and wrapping it up with my app. Here's the file:
services:
  app:
    container_name: spring-boot-postgresql
    image: 'docker-spring-boot-postgres:latest'
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/compose-postgres
      - SPRING_DATASOURCE_USERNAME=compose-postgres
      - SPRING_DATASOURCE_PASSWORD=compose-postgres
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  db:
    image: 'postgres:13.1-alpine'
    container_name: db
    environment:
      - POSTGRES_USER=compose-postgres
      - POSTGRES_PASSWORD=compose-postgres
app and db are the individual services in my container setup. For each, I'm pulling the relevant images for Spring and PostgreSQL respectively. Under app.build, we're setting the context to the current directory (src/main/docker) and pulling the Dockerfile from there.
A few areas specific to my setup:
- Spring Boot runs on port 8080 by default. In my application.properties configuration, I've set the port to 80. This is the default HTTP port and makes it so that, on the EC2 server, I'll be able to access the app directly. Otherwise, instead of "myapp.com", I would have to access "myapp.com:8080". To match both within and without the container, I'm setting the port config (a sketch of the property file follows below).
- I'm setting my environment variables on both services. The default port for PostgreSQL is 5432, so that's where the db url points.
- Hibernate is an ORM for mapping Java objects to SQL/relational databases. Here I'm specifying that Hibernate should update the SQL schema based on my application's model configuration.
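For reference, here's what that port setting looks like; a minimal sketch, assuming Spring Boot's standard server.port property:

# src/main/resources/application.properties
# Serve on the default HTTP port so the app is reachable without a port suffix
server.port=80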
AWS Setup
At this point, I'll point you to the AWS docs for setting up an EC2 instance. Here's the gist:
- Ensure you have a VPC created. The default is fine if you have it.
- Instantiate your EC2 instance, configured to Linux.
- Generate your key pair.
- Edit the security group to allow inbound HTTP requests (CLI sketch below).
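If you'd rather script that last step, here's a rough sketch with the AWS CLI (the security group ID is a placeholder for your own):

# Allow inbound HTTP on port 80 from anywhere; sg-0123456789abcdef0 is a placeholder
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0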
Once your EC2 is up, it's time to SSH into it!
SSH and Installs
From your local machine, grab your key pair as well as the public DNS address. (You can find instructions on the instance page after clicking "Connect".)
ssh -i /main-keypair.pem ec2-user@ec2-34-75-385-24.compute-1.amazonaws.com
The most magical part to me: after that, you'll be logged in and accessing the Linux terminal on your server!!
Since it's simply a Linux server, we can install all the dependencies we need just as if we were doing it on our own machine.
From here, install:
- Docker
- Docker Compose
- Git
- Maven (or whichever build tool you are using)
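As a rough sketch, those installs might look something like this (package names and the Compose version are assumptions here; check the current docs for your distro):

# Docker and Git from the package manager (assuming Amazon Linux 2)
sudo yum install -y docker git
sudo systemctl enable --now docker

# Docker Compose as a standalone binary; the version here is a placeholder
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Maven for building the jar
sudo yum install -y maven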
After that, here's how we'll get the code onto our server:
- Add the current user to the docker group:
sudo usermod -aG docker $USER
sudo reboot
- Clone your git repo to the server (prereq: upload your project to GitHub!):
git clone ssh://john@example.com/path/to/my-project.git
- Build the application locally:
mvn package
We'll have to move the jar file to the docker directory once again.
- Navigate to the docker directory:
cd src/main/docker
- Build the docker image:
docker-compose -f docker-compose.yml build
- Run the container with docker-compose up, or docker-compose up -d to run in the background and keep it running after you exit the server.
After that, accessing the public DNS address should show your app up and running!
Automation
Now the app is up! However, what if we need to make changes to the app? It's not a back-breaking process, but it would involve a few steps:
- Git push changes
- SSH back into the server
- Clone the repo
- Rebuild the executable
- Rebuild the docker image
- Rerun the docker container
Something that is certainly doable in a few minutes. But it screams for automation, doesn't it?
The next step for me would be to embrace that automation. Knowing the steps individually for deploying an app to a Linux server, I would be taking a look at tools such as GitHub Actions or CircleCI to automate much of this process.
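As a sketch of where that could go, here's a minimal GitHub Actions workflow that SSHes into the server and reruns those steps on every push. The secret names are placeholders, and appleboy/ssh-action is just one popular option for the SSH step:

# .github/workflows/deploy.yml (hypothetical)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ec2-user
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd my-project
            git pull
            mvn package
            cp target/demo-0.0.1-SNAPSHOT.jar src/main/docker/app.jar
            cd src/main/docker
            docker-compose -f docker-compose.yml build
            docker-compose up -d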
Then, of course, there are many more considerations for a real world app. Performance monitoring, error logging, automatic scaling, load balancing, just to name a few!
It was great to take a deep dive on deployment in isolation! On to exploring further tooling to support that process.
White Coat
Blog Post Syntax Highlighting
I've added syntax highlighting to the blog! Long overdue. Here's how I made it happen:
Setup
This site is a Next.js app. The blog posts are generated with the built-in Static Site Generation feature. For each post, I grab all the URLs to render, and the pages are constructed at build time:
import AlbumPage from '/components/albumPage';
import { getAllPosts, getAlbumBySlug, getAlbums } from '../lib/api';
import { getPostBySlug } from '../lib/markdownAccess';
import PostPage from '/components/PostPage';
import markdownToHtml from '../lib/markdownToHtml';

// The Main Component
export default function SlugPage({ post, album }) {
  if (post) return <PostPage post={post} />;
  if (album) return <AlbumPage album={album} />;
}

// Get static props - gather required page data based on page
export async function getStaticProps({ params }) {
  // . . .
  const post = getPostBySlug(params.slug, [ /* ... */ ]);
  if (post) {
    return {
      props: {
        post,
      },
    };
  }
  return {
    notFound: true,
  };
}

// Get the static paths for all posts and pages
export async function getStaticPaths() {
  const posts = getAllPosts(['slug']);
  const albums = getAlbums();
  const slugs = [...albums, ...posts].map((contentObj) => contentObj.slug);
  return {
    paths: slugs.map((slug) => {
      return {
        params: {
          slug,
        },
      };
    }),
    fallback: 'blocking',
  };
}
The post object contains the raw markdown and metadata for the page. All of the site's pages are built from that markdown and are rendered to JSX through this component:
import markdownStyles from './markdown-styles.module.css';
import Markdown from 'markdown-to-jsx';
import Link from 'next/link';
import Image from 'next/image';
import NextLink from './NextLink';

export default function PostBody({ content }) {
  return (
    <div className="markdown">
      <Markdown
        options={{
          overrides: {
            a: NextLink,
            img: BlogImage,
          },
        }}
      >
        {content}
      </Markdown>
    </div>
  );
}

// const BlogImage = (props) => <Image {...props} width={800} layout="fill" />;
const BlogImage = (props) => (
  <a href={props.src} target="_blank" rel="noopener noreferrer">
    <img {...props} />
  </a>
);
Markdown to JSX is doing the heavy lifting of rendering my markdown annotations to HTML. I've also plugged in a few custom overrides to make use of Next features, such as NextLink to handle routing through the app, as well as an img override to open images in a new tab by default.
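The NextLink override itself isn't shown here; as a hypothetical sketch, assuming a newer Next.js where Link renders its own anchor, it could look something like this:

// Hypothetical sketch of the NextLink override in ./NextLink
import Link from 'next/link';

export default function NextLink({ href, children, ...rest }) {
  // Send external links out through a plain anchor in a new tab
  if (href.startsWith('http')) {
    return (
      <a href={href} target="_blank" rel="noopener noreferrer" {...rest}>
        {children}
      </a>
    );
  }
  // Internal links route through the Next.js client-side router
  return (
    <Link href={href} {...rest}>
      {children}
    </Link>
  );
}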
Adding In Highlight.js
Highlight.js is a flexible library that can do exactly what I'm looking for, both on the client and server.
Since I'm building static pages, I'll reach for their server implementation to call:
html = hljs.highlightAuto('<h1>Hello World!</h1>').value
I could use their client-side approach, wrapped up in a useEffect. However, that adds to the JS bundle sent down the wire. Not to mention, I'd get an ugly flicker effect once the styles kicked in.
So, I'm opting to build another override.
Markdown to JSX renders code in a <pre> with a nested <code> tag. So I'll add my own components to plug in the synchronous syntax highlighting:
First, importing highlight.js and adding my override:
import hljs from 'highlight.js';

export default function PostBody({ content }) {
  return (
    <div className="markdown">
      <Markdown
        options={{
          overrides: {
            a: NextLink,
            img: BlogImage,
            pre: Pre,
          },
        }}
      >
        {content}
      </Markdown>
    </div>
  );
}
And then writing my custom components:
const CodeBlock = ({ className, children }) => {
  // highlightAuto guesses the language from the given subset and returns highlighted HTML
  const highlighted = hljs.highlightAuto(children, ['java', 'javascript', 'python', 'react', 'yaml']).value;
  return (
    <pre>
      <code dangerouslySetInnerHTML={{ __html: highlighted }} />
    </pre>
  );
};

const Pre = ({ children, ...rest }) => {
  // markdown-to-jsx nests the raw code string inside a <code> child;
  // pull it out and pass it through the highlighter
  if (children?.props?.children) {
    return <CodeBlock>{children.props.children}</CodeBlock>;
  }
  return <pre {...rest}>{children}</pre>;
};
Voilà! The colors you see above are thanks to these changes!
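One detail that's easy to miss: the colors themselves come from a highlight.js theme stylesheet, which has to be imported somewhere global. A minimal sketch, assuming the bundled atom-one-dark theme:

// e.g. in pages/_app.js; any theme under highlight.js/styles works
import 'highlight.js/styles/atom-one-dark.css';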
New Album: Dog Angst 🐶
Lucy's been listening to my teenage emo CDs! Now she's all moody.
Purchase on Bandcamp and listen on Spotify or any of your favorite streaming services!
Faber - The Medieval Piper
Been enjoying sightreading short and sweet 5-finger pieces like this.
Desk Dino
Comparison Sorting in Python
Of all the pages of official docs in Python, the page on sorting by Andrew Dalke and Raymond Hettinger may be my favorite. It's clear, it gradually builds in complexity, and it provides ample examples along the way.
Here's my situation this week: Simple enough, I needed to sort a list of dictionaries by a property on those dictionaries:
data = [
    {
        'name': 'Chris',
        'count': 5,
    },
    {
        'name': 'Lucy',
        'count': 3,
    },
    {
        'name': 'Clyde',
        'count': 3,
    },
    {
        'name': 'Miranda',
        'count': 10,
    },
]
To sort by the count, I could pass a lambda to access the property on the dict:
sorted(data, key=lambda x: x['count'])
Say the counts are equal, as they are with Lucy and Clyde in my example. If so, I would want to sort by the name.
Returning a tuple covers that:
sorted(data, key=lambda x: (x['count'], x['name']))
To reverse the order, there's a named parameter for that:
sorted(data, key=lambda x: (x['count'], x['name']), reverse=True)
Problem all sorted out!
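One postscript: reverse=True flips the whole tuple, both count and name. If I wanted counts descending while keeping names ascending, negating the numeric key handles it, and operator.itemgetter is a tidy alternative to the lambda for the simple ascending case:

from operator import itemgetter

# Descending count, ascending name: negate the numeric key instead of reverse=True
sorted(data, key=lambda x: (-x['count'], x['name']))

# Equivalent to the lambda for the straightforward ascending sort
sorted(data, key=itemgetter('count', 'name'))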
Filter for the First Match in Python
match = next(x for x in db_data if x["PropertyId"] == parsed_incoming_data["PropertyId"])
Breaking it down:
- next() returns the first value of an iterator. On subsequent calls, it returns the following items. next requires an iterator, which yields objects on the fly. This is different from a list, which contains and stores its values. Lists and tuples can be converted to iterators with the iter() function.
- In my example above, the generator expression x for x in db_data yields an iterator, covering our type requirement for next.
- We're filtering by matching another value: if x["PropertyId"] == parsed_incoming_data["PropertyId"]
Voilà! Filtering for a match in one line.
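One caveat: next() raises StopIteration if nothing matches. Passing a default as the second argument sidesteps that:

# Returns None instead of raising StopIteration when no record matches
match = next(
    (x for x in db_data if x["PropertyId"] == parsed_incoming_data["PropertyId"]),
    None,
)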
Jody Fisher - Triad Etude
Enjoying the space of these chords!
Sunset Foliage
Abstraction between React Components
As Jamison Dance put it in this week's episode of the Soft Skills Engineering podcast: "It takes just one more level of abstraction to be a great engineer!"
A task at work this week had me looking to decouple one of our components that uses a third-party library. Let's say it's a bar graph that uses something like Chartist.js. We want to be able to reuse this chart in full-page settings as well as in spots where it's a widget inserted into a page. In one case, clicking a bar would open a tooltip. In another, it may link to another page.
Normally, here are my considerations for doing that:
- From the base component, elevate as much of the logic as possible. The child should essentially be a "view" component only concerned with rendering data.
- Pass any interactivity down through callbacks such as "onClick", "onMouseEnter", etc.
That works fine in the example below:
const BarGraphContainer = (props) => {
  const dispatch = useDispatch();

  const onClickBar = (segment) => {
    dispatch(actions.openModal(segment));
  };

  return (
    <BarGraph
      onClickBar={onClickBar}
    />
  );
};
The Problem
In this instance, the base component is handling a third-party library for initialization, setup, and much of the internal interactions. In some cases, I want to control firing off an event in that internal library (e.g., opening a tooltip on click). But in other cases, I want an external behavior (linking away or opening a modal).
Passing Context Upstream
An interesting solution I came up with was one that I had seen in other libraries and ecosystems: Passing context to the callback.
When considering passing callbacks in React, a simple use case typically only passes an event object.
const onClick = (e) => e.preventDefault();
However, if I need access to the internal API of our third-party library, I can pass that up through the callback as well. Even better, I can abstract most of the internal workings of the library with a wrapper function. Take a look at the example:
const BarWidgetContainer = (props) => {
  const onClickBar = (segment, graphContext) => {
    graphContext.renderToolTip(segment);
  };

  return (
    <BarGraph
      onClickBar={onClickBar}
    />
  );
};
Here, the renderToolTip function likely has a great deal of logic specific to the library I'm interfacing with. At a higher level, though, I don't have to worry about that at all. I can simply call for that functionality when needed from a level of abstraction.
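To make the pattern concrete, here's a hypothetical sketch of the other side of that contract, the base BarGraph view component. The chart setup and renderToolTip internals are stand-ins, not Chartist's real API:

import React, { useRef } from 'react';

// Placeholder for mapping a DOM click back to a chart segment (hypothetical helper)
const findSegmentFromEvent = (e) => e.target.dataset.segment;

const BarGraph = ({ data, onClickBar }) => {
  const chartEl = useRef(null); // chart initialization with `data` would happen here

  // A thin wrapper over the charting library's internals; consumers
  // only ever see this small surface area (renderToolTip is a stand-in)
  const graphContext = {
    renderToolTip: (segment) => {
      console.log('render tooltip for', segment); // library-specific call goes here
    },
  };

  return (
    <div
      ref={chartEl}
      onClick={(e) => onClickBar(findSegmentFromEvent(e), graphContext)}
    />
  );
};

export default BarGraph;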
Use Cases
As mentioned, the abstraction is great for providing flexibility without complexity. Consumers of the component can interface with the internals without getting into the weeds of it.
A major con, though, is the added coupling. In most cases, this could be seen as an anti-pattern in React, given the unidirectional data flow that's preferred in the ecosystem.
Considering these points, we ultimately decided on another solution that allowed for the parent-to-child data flow. It makes the most sense for our situation since it keeps our code cleaner. Realistically, we're also only using this component in a handful of use cases.
But why did I write this up, then? I'm keeping the pattern in my back pocket. Situations where I can see this being useful are broader use cases. Say that instead of our internal React component, this were part of a larger library consumed by more users. The trade-off of coupling for abstraction and flexibility might make sense in a more widely used library. That's likely why it's a frequent pattern in open source tools, after all.
It was a fun experiment this week! Saving this pattern for another time.
Satin Doll
Trying out stride piano! All without looking at my hands!
Deep Sea Scene
Coordinating Multiple Serverless Functions on AWS Lambda
Sharing a bit of initial research for a serverless project I'm working on.
I've been wrestling this week with a challenge to coordinate a cron job on AWS Lambda. A single script running on Lambda is ideal: there's no server overhead, the service is isolated and easy to debug, and it's cost effective. The challenge, though, is how to scale the process when the run time increases.
Lambda's runtime limit at this moment is 15 minutes. Fairly generous, but it is still a limiter. My process will involve web scraping, which, if done sequentially, could easily eat those 15 minutes once I have several processes to run.
The process looks something like this so far:
- Initial function start; get a list of pages to crawl.
- Sequentially crawl those pages (a separate lambda function called sequentially).
- After the page crawl, send an update on the status of the crawl and update the DB with results.
Simple when it's 3 or 5 pages to crawl for a minute each, but an issue when that scales up. Not to mention the inefficiency of waiting for all processes to end before sending results from the crawl.
This would be a great opportunity to lean on the power of the microservice structure by switching to concurrent tasks. The crawls can already be called sequentially; the change would be figuring out how to send the notification after each crawl completes.
To do this, each of those steps above can be broken up into their own separate lambda functions.
Once they're divided into their own serverless functions, the challenge is to coordinate them.
Self Orchestration
One option here would be to adjust my functions to pass state between them. Above, the first lambda would grab the list of pages and fire off an instance of the second lambda for each page. The crawler could receive an option to notify and update the db.
It's a fine use case for it! With only three steps, it wouldn't be overly complicated.
To call a lambda function asynchronously, it simply needs to be invoked with the "Event" invocation type.
import json

import boto3

lambda_client = boto3.client('lambda', region_name='us-east-2')

page = { ... }

# "Event" invocation runs the function asynchronously; the call returns immediately
lambda_client.invoke(
    FunctionName='api-crawler',
    InvocationType='Event',
    Payload=json.dumps({'page': page}),
)
Step Functions
Say it were more complicated, though! Three more steps, or a need to handle failure conditions!
Another option I explored was using Step Functions and a state machine approach. AWS allows you to orchestrate multiple lambda functions in a variety of patterns, such as chaining, branching, and parallelism. "Dynamic parallelism" is a pattern that would suit my case here. Though, I may not necessarily need its primary benefit of shared and persistent state.
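For a sense of what that looks like, here's a minimal sketch of a state machine using a Map state for the dynamic parallelism. The state names, ARNs, and the pages path are placeholders for my actual setup:

{
  "StartAt": "GetPages",
  "States": {
    "GetPages": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-2:123456789012:function:get-pages",
      "Next": "CrawlPages"
    },
    "CrawlPages": {
      "Type": "Map",
      "ItemsPath": "$.pages",
      "Iterator": {
        "StartAt": "Crawl",
        "States": {
          "Crawl": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-2:123456789012:function:api-crawler",
            "End": true
          }
        }
      },
      "Next": "Notify"
    },
    "Notify": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-2:123456789012:function:notify",
      "End": true
    }
  }
}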
The Plan
For this use case, I'm leaning towards self orchestration. The state being passed is not overly complex: a list of pages from step one to step two, then a result of success or failure from step two to step three. The process has resources in place to log errors at each step, and there's no need for corrective action at any point in the process.
Next step is the implementation. To be continued!