Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.
Automating Image Uploads to Cloudinary with Python
There's nothing quite like the joy of automating something that you do over and over again.
This week I wrote a Python script to make my life easier with image uploads for this blog. The old routine:
- Optimize my images locally (something Cloudinary already automates, but I do by hand for...fun?!)
- Open up the Cloudinary portal
- Navigate to the right directory
- Upload the image
- Copy the URL
- Paste the image into my markdown file
- Optionally add an optimization tag if needed
I can eliminate most of those steps with a handy script. Here's what I whipped up, with some boilerplate provided by the Cloudinary SDK quick start guide:
from dotenv import load_dotenv
load_dotenv()

import cloudinary
import cloudinary.uploader
import cloudinary.api
import pyperclip

config = cloudinary.config(secure=True)
print("****1. Set up and configure the SDK:****\nCredentials: ", config.cloud_name, config.api_key, "\n")

print("Image to upload:")
input1 = input()
input1 = input1.replace("'", "").strip()

print("Where is this going? (Art default)")
options = [
    "/chrisdpadilla/blog/art",
    "/chrisdpadilla/blog/images",
    "/chrisdpadilla/albums",
]
folder = options[0]
for i, option in enumerate(options):
    print(f'{i+1} {option}')
selected_number_input = input()
if not selected_number_input:
    selected_number_input = 1
selected_number = int(selected_number_input) - 1
if 0 <= selected_number < len(options):
    folder = options[selected_number]

res = cloudinary.uploader.upload(input1, unique_filename=False, overwrite=True, folder=folder)

if res.get('url', ''):
    pyperclip.copy(res['url'])
    print('Uploaded! URL copied to clipboard:')
    print(res['url'])
Now, when I run this script in the command line, I can drag an image in, the script asks where to save the file, and the URL is automatically copied to my clipboard. Magic! ✨
A couple of steps broken down:
Folders
I keep different folders for organization. Album art is in one. Blog Images in another. Art in yet another. So first, I select which one I'm looking for:
print("Where is this going? (Art default)")
options = [
    "/chrisdpadilla/blog/art",
    "/chrisdpadilla/blog/images",
    "/chrisdpadilla/albums",
]
folder = options[0]
for i, option in enumerate(options):
    print(f'{i+1} {option}')
selected_number_input = input()
and later on, that's passed to the Cloudinary API as a folder:
if not selected_number_input:
    selected_number_input = 1
selected_number = int(selected_number_input) - 1
if 0 <= selected_number < len(options):
    folder = options[selected_number]

res = cloudinary.uploader.upload(input1, unique_filename=False, overwrite=True, folder=folder)
Copying to clipboard
Definitely the handiest part, and it's just a quick install to get it. I'm using pyperclip to make it happen:
if res.get('url', ''):
    pyperclip.copy(res['url'])
Clementi - Sonatina in F Maj Exposition
Note to self: don't wait until a couple of weeks after practicing something to record 😅
Blue Hair, Don't Care
Next Auth Custom Session Data
I've been tinkering with Next Auth lately, getting familiar with the new App Router and React Server Components. Both have made for a big paradigm shift, and a really exciting one at that!
With all the brand-new tech, and with many people hard at work integrating Next Auth with the new hotness, there's still a bit of transition going on. I found I had to do a bit more digging to really set up Next Auth in my project, so here are some of the holes that ended up getting filled:
Getting User Data from DB through JWT Strategy
When you use a database adapter, Next Auth automates saving and updating user data. When migrating an existing app and db to Next Auth, you'll likely want to handle the db interactions yourself to fit your current implementation.
Here's what the authOptions looked like for an OAuth provider:
export const authOptions = {
  // adapter: MongoDBAdapter(db),
  providers: [
    GithubProvider({
      clientId: process.env.GITHUB_ID,
      clientSecret: process.env.GITHUB_SECRET,
    }),
  ],
  session: {
    strategy: 'jwt',
    maxAge: 30 * 24 * 60 * 60,
  },
  secret: process.env.NEXTAUTH_SECRET,
};
Notice that I'm leaving out the adapter above and using the JWT strategy instead.
There's a bit of extra work to be done here. The session will save the OAuth data and send it along with the token. But, more than likely, you'll have your own information about the user that you'd like to send, such as roles within your own application.
To do that, we need to add a callbacks object to the authOptions with jwt and session methods:
async jwt({token, user}) {
  if(user) {
    token.user = user;
    const {roles} = await db.users.findOne(query);
    token.roles = roles;
  }
  return token;
},
async session({session, token}) {
  if(token.roles) {
    session.roles = token.roles;
  }
  return session;
},
So there's a bit of hot-potato going on. On initial sign in, we'll get the OAuth user data, and then reference our db to find the appropriate user. From there, we pass that to the token, which is then extracted into the session later on.
Once that's set, you'll want to pass these authOptions in every time you call getServerSession so that these callbacks are used to grab the dbUser field. Here's an example in a server action:
import React from 'react';
import {getServerSession} from 'next-auth';
import { authOptions } from '@api/[...nextauth]/route';
import Button from './Button';

export default async function ServerActionPage() {
  const printName = async () => {
    'use server';
    const session = await getServerSession(authOptions);
    console.log(session);
    return session?.user?.name || 'Not Logged In';
  };

  return (
    <div className="m-20">
      <Button action={printName} />
    </div>
  );
}
When that's logged, we'll get the OAuth user info and the roles we passed in from our db:
{
  user: {...},
  roles: [...]
}
Just Friends
Lovers no more~
Trombone Gesture
Brunner - Rondoletto
Biiiiig finger twister! 🔀
WHO has the smoothest moves?
Structs in Go
There are two ways of creating datatypes similar to JavaScript Objects and Python Dictionaries in Go: Structs and Maps.
Structs are a collection of data that are related. Values are stored next to each other in memory. Structs are also a value type.
Maps are a hash map data type. They are a key value pair where both keys and values are statically typed individually. So all keys need to be of the same type, and all values need to be the same type. The main benefit is that, as a hash map, indexing and look up is much faster.
Let's break all that down:
Values are stored next to each other in memory on the assumption that they'll be lightweight. In this way, a struct is similar to an array whose keys are strings. The values aren't indexed the way a hash map indexes its keys, though. The tradeoff is that a struct is lighter on memory, but slower to iterate through.
Structs are a value type. So if we were to pass them into a function, the entire struct would be copied. Maps, on the other hand, are a reference type. The address in memory for the Map is passed into a function and any changes to the map within the function will occur as a side effect to the same Map.
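To make that difference concrete, here's a small sketch of my own (the function names are mine, not standard library) contrasting the copy semantics of structs with the reference semantics of maps:

```go
package main

import "fmt"

type car struct {
	make     string
	maxSpeed int
}

// Structs are value types: tune receives a copy, so the caller's car is unchanged.
func tune(c car) {
	c.maxSpeed = 200
}

// Maps are reference types: bump mutates the caller's map as a side effect.
func bump(speeds map[string]int) {
	speeds["Prius"] = 200
}

func main() {
	c := car{make: "Toyota", maxSpeed: 120}
	tune(c)
	fmt.Println(c.maxSpeed) // still 120

	speeds := map[string]int{"Prius": 120}
	bump(speeds)
	fmt.Println(speeds["Prius"]) // now 200
}
```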
Structs
Declaring structs requires a type to be created first:
type car struct {
    make     string
    model    string
    maxSpeed int
}

c := car{make: "Toyota", model: "Prius", maxSpeed: 120}
Methods
Go isn't an object-oriented language, but, like JavaScript, it can be written with similar principles. An example is defining methods on structs:
type car struct {
    make     string
    model    string
    maxSpeed int
}

func (c car) floorIt() int {
    return c.maxSpeed
}

c := car{make: "Toyota", model: "Prius", maxSpeed: 120}
c.floorIt() // 120
Embedding
Another OOP principle borrowed in Go is composition. In Go, we can embed structs to create more complex types while still maintaining the flexibility of smaller pieces available as individual structs.
type car struct {
    make     string
    model    string
    maxSpeed int
}

type raceCar struct {
    car
    turboEngine string
}

rc := raceCar{
    car: car{
        make:     "Toyota",
        model:    "Prius",
        maxSpeed: 120,
    },
    turboEngine: "MAX",
}
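A nice perk of embedding is that the inner struct's fields are promoted to the outer type, so both access paths below reach the same data. A quick runnable sketch:

```go
package main

import "fmt"

type car struct {
	make     string
	model    string
	maxSpeed int
}

type raceCar struct {
	car
	turboEngine string
}

func main() {
	rc := raceCar{
		car:         car{make: "Toyota", model: "Prius", maxSpeed: 120},
		turboEngine: "MAX",
	}
	// Embedded fields are promoted: both forms reach the same value.
	fmt.Println(rc.maxSpeed)     // 120
	fmt.Println(rc.car.maxSpeed) // 120
}
```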
Go Performance
Performance and Memory
When looking at a language's performance, the two main considerations are memory usage and execution speed.
Taking two ends of the spectrum, we could look at Rust on one end and C# on the other.
C# is a high-level language that runs on a virtual machine. (A strange example, perhaps, because C# does compile, but to an Intermediate Language rather than directly to machine code.) C# also handles memory management for you. The overhead of the virtual machine and garbage collection makes for a fluid developer experience, but compromises on execution speed and memory usage.
Rust, on the other hand, is a compiled language, meaning that to run Rust, you build an executable from the human-readable Rust code. This saves the time it would take a virtual machine to interpret the language, or, in the case of Python or Ruby, the time it takes for the runtime to interpret the script.
Rust also requires the developer to handle much of their own memory management. When written properly, this allows for really performant applications, since garbage collection is one more piece of overhead taken off of running Rust code.
Where Go Fits
Go uniquely sits in a great position between these two ends, balancing the benefits offered by a higher-level language with the speed of a compiled language.
Go does compile to machine code. You can run go build main.go to compile the program down to an executable. So we get the benefit of quick execution, eliminating the need for interpretation time.
While doing so, Go bundles in a much lighter package called the Go runtime that handles garbage collection. With a specialized focus on memory management, this still allows for a smooth developer experience without adding as much overhead as the Java Runtime Environment or C#'s Common Language Runtime.
Benchmarks of Go's speed land right between compiled, non-garbage-collected languages like C, C++, and Rust on one end, and higher-level languages like Java and C# on the other.
One added benefit of being compiled is having one less dependency in your deployment environment. The platform isn't required to have a specific version of a Go interpreter available to execute a program.
Parkening - A Minor Study
Lucy can tell when I'm about to finish recording, she knows rubs are soon to follow!
Halloween!
Stateless Sessions With Cookies
I'm diving into a large research project around authentication, so get ready for many a blog about it!
This week, an approach to handling email/password login.
Authentication
Authentication is simply verifying someone's identity. It's different from authorization, which deals with roles and permissions, or whether a user can perform certain actions within your application. Authentication is logging someone in; authorization is verifying they have access to, say, an admin page or editing functionality.
Email and password is the most ubiquitous approach for authentication. And implementing it only takes a few components.
Storage and Encryption
For a custom solution, email and password combinations can be stored on the DB along with a user's profile. When doing this, password encryption is a vital ingredient in the event of a data leak.
bcrypt is a tried and tested solution here. From their documentation, hashing and checking the password are simple function calls:
// encrypt
bcrypt.genSalt(saltRounds, function(err, salt) {
    bcrypt.hash(myPlaintextPassword, salt, function(err, hash) {
        // Store hash in your password DB.
    });
});

// Load hash from your password DB.
bcrypt.compare(myPlaintextPassword, hash, function(err, result) {
    // result == true
});
saltRounds, if that stands out to you, is bcrypt's cost factor: it controls how many rounds of processing go into generating the hash, and so how expensive the hash is to brute-force.
HTTP and Encryption
All fine and well once the password gets here, but what about while it's being sent to the server? HTTP is simply a plain-text protocol. Were the request intercepted by a malicious party, the email and password combo could be abused.
From the client, we can hash the password with the SHA-256 algorithm before sending it, then compare hashes on the server.
Here's a client example from MDN:
const text =
  "An obscure body in the S-K System, your majesty. The inhabitants refer to it as the planet Earth.";

async function digestMessage(message) {
  const msgUint8 = new TextEncoder().encode(message); // encode as (utf-8) Uint8Array
  const hashBuffer = await crypto.subtle.digest("SHA-256", msgUint8); // hash the message
  const hashArray = Array.from(new Uint8Array(hashBuffer)); // convert buffer to byte array
  const hashHex = hashArray
    .map((b) => b.toString(16).padStart(2, "0"))
    .join(""); // convert bytes to hex string
  return hashHex;
}

digestMessage(text).then((digestHex) => console.log(digestHex));
And then on a Node server, the built-in crypto library can compute the same hash for comparison.
Sustaining Session
Great! A user is signed in on the homepage. But once they navigate to another page, how do we maintain that logged-in state?
I've actually written on two different approaches before: JWTs and session storage. Here I'll talk a bit about server sessions and then focus on a twist on the JWT pattern:
A classic approach is to maintain session data on the server. Once a user is authenticated, a cookie is then sent to be stored on the client browser. That cookie comes along for the ride on every request back to the server with no extra overhead (unlike, say, local storage, which would require writing some logic.) With an authentication token stored on the cookie, the server can verify the token and then confirm that it's from the logged in user.
A nice approach for many reasons! If needed, an admin can manually log the user out if there's suspicious activity with an account. Cookies are also a lightweight and easy to implement technology built into the browser.
One drawback is that the session is tied to the specific server. There's added complexity here in a micro service environment. Maintaining that state may also slow the server down with the added overhead.
Another take on this approach is how Ruby on Rails and the iron-session package still make use of cookies, but with a "stateless" session from the server.
From the Ruby on Rails guide, the idea is that session IDs are replaced with a session hash that can only be decrypted and validated by your server. In this case, it's the client keeping track of their own session, while the server is simply responsible for approving the token. Decrypted, the cookie may contain basic client info:
{user: {id: 100}}
(A note to still avoid PII (personally identifiable information) or storing passwords here!)
This is similar to using JWTs as authentication tokens. A benefit of using a package like iron-session here, though, is that the session cookie's data is encrypted with a non-spec'd format. JWT, however, is a standard: unless you encrypt the payload yourself, anyone can decode your JWT.
Cinnamon Triads
A work in progress! Hoping to get it out before we skip right over to Winter in Texas 🍂