2025-01-16 9:30pm ADT
(Not only blogging on technical stuff - I'll toss the odd other thing on here. This is the closest thing to social media I'll use.)

Went on a little break to play video games, and today it was Delver. So far it's been a good experience, and I'll play it again at some point to try to get to some sort of conclusion.
2025-01-15 9:30pm ADT
This finally feels like a real blog again (the last time I ran one was in 2010 or so), so I added a comments section. It took a bit to set up, and uses a self-hosted version of Commento.
Anonymous comments are enabled and manually approved, so go nuts within reason.
It's quite neat that I can add this while still keeping a static site setup for the actual content here (static-ish - a NodeJS-packaged site that I commit to via git, which runs a pipeline to publish). Pretty happy about that, since the page-load speed is awesome, I have some flexibility, and I can toss it into a k8s cluster!
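(If you're curious, embedding it is just two lines in the page template. A sketch, with comments.example.com standing in for my actual instance:)

```html
<!-- Commento mounts itself into this div -->
<div id="commento"></div>
<!-- Point the script at your self-hosted instance (comments.example.com is a placeholder) -->
<script defer src="https://comments.example.com/js/commento.js"></script>
```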
2025-01-15 6:15pm ADT
It took lots of little bits of digging to get Vault working with GitHub Actions. Here's a quick summary of how I got it working, using wmb (my webhook-to-IRC bot) as an example.
Inside a job, you can add this:
```yaml
- name: Retrieve wmb info from vault
  id: import-secrets-wmb
  uses: hashicorp/vault-action@v3.1.0
  with:
    url: ${{ secrets.VAULT_ADDR }}
    method: approle
    roleId: ${{ secrets.VAULT_ROLE_ID }}
    secretId: ${{ secrets.VAULT_SECRET_ID }}
    secrets: |
      kv/data/pipeline/wmb WMB_URL ;
      kv/data/pipeline/wmb WMB_PASSWORD
    exportEnv: true
```
Then, access the secrets in the job like this:
```yaml
- name: Notify IRC on Success
  run: |
    export COMMIT_MSG=$(git log -1 --pretty=%B)
    export MESSAGE="Build and push of ghcr.io/${{ github.repository }}:staging completed with commit message: $COMMIT_MSG. See https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
    curl -X POST -H "Content-Type: application/json" -d "{\"message\": \"$MESSAGE\", \"password\": \"${{ steps.import-secrets-wmb.outputs.WMB_PASSWORD }}\", \"colourcode\": 3}" ${{ steps.import-secrets-wmb.outputs.WMB_URL }}
  if: success()
- name: Notify IRC on Failure
  run: |
    export COMMIT_MSG=$(git log -1 --pretty=%B)
    export MESSAGE="Build and push of ghcr.io/${{ github.repository }}:staging failed with commit message: $COMMIT_MSG. See https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
    curl -X POST -H "Content-Type: application/json" -d "{\"message\": \"$MESSAGE\", \"password\": \"${{ steps.import-secrets-wmb.outputs.WMB_PASSWORD }}\", \"colourcode\": 4}" ${{ steps.import-secrets-wmb.outputs.WMB_URL }}
  if: failure()
```
So, in summary: to access the secrets inside other steps of the job, use `${{ steps.import-secrets-wmb.outputs.WMB_URL }}` and `${{ steps.import-secrets-wmb.outputs.WMB_PASSWORD }}`.
The biggest thing I noted was needing to add the `;` separator and put each secret on its own line when fetching multiple secrets, specifically:
```yaml
secrets: |
  kv/data/pipeline/wmb WMB_URL ;
  kv/data/pipeline/wmb WMB_PASSWORD
```
Now just set VAULT_ADDR, VAULT_ROLE_ID, and VAULT_SECRET_ID in the repo secrets for GitHub Actions, and you're good to go.
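For completeness, here's roughly what the Vault side of that looks like. This is a minimal sketch, not my exact setup - the role name wmb-pipeline, the policy name wmb-read, and the example values are all placeholders - and it assumes a KV v2 engine mounted at kv/ (which is why the paths in the Actions config above carry the extra data/ segment):

```bash
# Store the secrets (KV v2: the CLI path omits the data/ segment
# that shows up in the Actions config above)
vault kv put kv/pipeline/wmb WMB_URL="https://wmb.example.com/notify" WMB_PASSWORD="hunter2"

# Enable AppRole auth and create a role that can read the secret
vault auth enable approle
vault policy write wmb-read - <<EOF
path "kv/data/pipeline/wmb" {
  capabilities = ["read"]
}
EOF
vault write auth/approle/role/wmb-pipeline token_policies="wmb-read"

# These two values become VAULT_ROLE_ID and VAULT_SECRET_ID in the repo secrets
vault read auth/approle/role/wmb-pipeline/role-id
vault write -f auth/approle/role/wmb-pipeline/secret-id
```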
2025-01-15 9:00am ADT
Lots of misguided information out there right now on how to make AdMob work with Expo (as of January 2025, Expo 52, to be specific), and all the LLMs will run you around in circles.
For one, don't use the expo-ads-admob package - it's deprecated on the latest Expo version.
You also do NOT need to eject from Expo to use AdMob - that's also a deprecated way of doing things. You do, however, have to run natively on Android/iOS instead of using Expo Go over wifi - which sorta sucks, but a quick USB-C cable to your phone and you're good to go.
To start, add this to your app.json file:
```json
{
  "expo": {
    ...
    "plugins": [
      "expo-router",
      ...
      [
        "react-native-google-mobile-ads",
        {
          "androidAppId": "ca-app-pub-xxxx",
          "iosAppId": "ca-app-pub-xxx"
        }
      ]
    ],
    ...
  }
}
```
Next, run `npx expo install react-native-google-mobile-ads` to install the required package.
Finally, here's a sample component to use for a small banner ad:
```tsx
import React from 'react';
import { StyleSheet, View } from 'react-native';
import { BannerAd, BannerAdSize, TestIds } from 'react-native-google-mobile-ads';

interface SmallBannerAdProps {
  adUnitId?: string; // Optional: falls back to TestIds.BANNER if not provided
}

const SmallBannerAd: React.FC<SmallBannerAdProps> = ({ adUnitId }) => {
  const adUnit = adUnitId || TestIds.BANNER; // Use TestIds.BANNER for testing

  return (
    <View style={styles.adContainer}>
      <BannerAd
        unitId={adUnit}
        size={BannerAdSize.ANCHORED_ADAPTIVE_BANNER}
      />
    </View>
  );
};

const styles = StyleSheet.create({
  adContainer: {
    width: '100%',
    marginTop: 20,
    justifyContent: 'center',
    alignItems: 'center',
  },
});

export default SmallBannerAd;
```
With this, you can just add `<SmallBannerAd />` to your screen and it will show a small banner ad using a test ad unit, so you won't break the terms of service by accidentally displaying a real ad.
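As a quick usage example (the screen itself is just illustrative - adjust the import path to wherever you put the component):

```tsx
import React from 'react';
import { Text, View } from 'react-native';
import SmallBannerAd from '../components/SmallBannerAd'; // hypothetical path

// Hypothetical screen showing the banner below some content
export default function HomeScreen() {
  return (
    <View style={{ flex: 1 }}>
      <Text>Main screen content goes here</Text>
      <SmallBannerAd />
    </View>
  );
}
```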
Finally, run `npx expo prebuild && npx expo run:android` to build and run your app (provided it's connected via USB and set up for development) and it'll build & launch. Navigate as required to see the ad, and voila. At this point you can add in your real ad units as per the react-native-google-mobile-ads docs and you'll be good to go - and you didn't have to get rid of all your Expo stuff.
2025-01-14 10:00pm ADT
Had quite the time trying to make an API that serves GeoJSON polygon data to a React Native MapView. The calls were melting and locking up pods in k8s, but the Postgres database wasn't really taking a beating, which was weird.
A little bit of digging with logging showed that assembling the datapoints for the polygons into structs was destroying the performance somewhere inside. A little more digging confirmed it with a pprof run: 30% of our execution time with only one request was spent on memory allocations, particularly where we assemble the features, which may carry arrays of lat/longs as large as a few hundred points.
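(As an aside, wiring pprof into a Go service like this only takes a couple of lines. A generic sketch, not the actual service code:)

```go
package main

import (
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* handlers on the default mux
)

func init() {
	// Serve pprof on a separate local port so it stays off the public API.
	go func() {
		http.ListenAndServe("localhost:6060", nil)
	}()
}
```

With that in place, `go tool pprof "http://localhost:6060/debug/pprof/profile?seconds=30"` grabs a CPU profile while you hit the endpoint.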
So, a sanitized snippet of the old code:
```go
var features []models.GeoJSONFeature
for rows.Next() {
    var result models.Response
    if err := rows.Scan(&result.ID, &result.ObjectID, &result.Holder, &result.ShapeArea, &result.ShapeLen, &result.Geom); err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to scan row"})
        fmt.Println(err)
        return
    }

    // Create a GeoJSON feature for each result
    feature := models.GeoJSONFeature{
        Type:     "Feature",
        Geometry: json.RawMessage(result.Geom), // Use the GeoJSON geometry directly
        ID:       result.ID,
    }
    features = append(features, feature)
}

// Construct the GeoJSON response
geoJSONResponse := models.GeoJSONResponse{
    Type:     "FeatureCollection",
    Features: features,
}
```
This part right here in particular is the murder scene:
```go
// Create a GeoJSON feature for each result
feature := models.GeoJSONFeature{
    Type:     "Feature",
    Geometry: json.RawMessage(result.Geom), // Use the GeoJSON geometry directly
    ID:       result.ID,
}
features = append(features, feature)
```
In a nutshell, what's going on here is that we create a brand-new GeoJSONFeature struct for each row in the database, then append it to the slice, which makes a copy. Our garbage collector then has to clean up the struct we just created, right away, on every iteration. It's quite aggressive.
So here comes sync.Pool to save the day.
```go
var geoJSONFeaturePool = sync.Pool{
    New: func() interface{} {
        return &models.GeoJSONFeature{}
    },
}

var features []models.GeoJSONFeature        // Slice to hold GeoJSON features
var pooledFeatures []*models.GeoJSONFeature // Track features to return to the pool

for rows.Next() {
    var result models.Response
    if err := rows.Scan(&result.ID, &result.ObjectID, &result.Holder, &result.ShapeArea, &result.ShapeLen, &result.Geom); err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to scan row"})
        fmt.Println(err)
        return
    }

    // Get a GeoJSONFeature from the pool
    feature := geoJSONFeaturePool.Get().(*models.GeoJSONFeature)
    feature.Type = "Feature"
    feature.Geometry = json.RawMessage(result.Geom)
    feature.ID = result.ID

    // Append the feature to the slice
    features = append(features, *feature)

    // Track the feature to return it to the pool later
    pooledFeatures = append(pooledFeatures, feature)
}

// Return all pooled features to the pool after use
for _, feature := range pooledFeatures {
    geoJSONFeaturePool.Put(feature)
}

// Construct the GeoJSON response
geoJSONResponse := models.GeoJSONResponse{
    Type:     "FeatureCollection",
    Features: features,
}
```
This change pretty much immediately fixed the performance of the API. It went from using up to a gig of RAM across three pods with just a handful of requests, down to a max of about 50MB of RAM for requests returning a few megabytes of data.
But how does it work? The gist of it is that we create a pool of objects and reuse them instead of creating new ones, which cuts way down on memory allocations and deallocations. The runtime no longer has to allocate a several-hundred-point array, copy it, garbage-collect it, and then do it all over again - instead it has objects it can re-use, where it simply overwrites the data rather than malloc-ing a new one every single time.
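If you want to see the pattern in isolation, here's a minimal, self-contained sketch of the same idea, using a toy Feature struct instead of the real models:

```go
package main

import (
	"fmt"
	"sync"
)

// Toy stand-in for the real feature struct
type Feature struct {
	Type string
	ID   int
}

var featurePool = sync.Pool{
	// New is only called when the pool has nothing to hand back
	New: func() interface{} {
		return &Feature{}
	},
}

func main() {
	for i := 0; i < 3; i++ {
		f := featurePool.Get().(*Feature) // reuse an old struct if one is available
		f.Type = "Feature"                // overwrite stale fields instead of allocating
		f.ID = i
		fmt.Println(f.Type, f.ID)
		featurePool.Put(f) // hand it back for the next iteration
	}
}
```

The key detail is that Put hands the struct back for the next Get, so steady-state processing allocates almost nothing - you just have to remember that anything you Get may contain stale data from its previous life, so overwrite every field you use.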