Scorecard

My goal: Develop digital fluency to significantly improve my employer's operational effectiveness, and to turn disparate, unstructured, and hard-to-find data into actionable intelligence.

Below is my self-assessment of my level of competency with various tools, and where I need to improve.

Level Guide

Level 1 — Awareness: Just getting started, concepts still fuzzy.

Level 2 — Basic Operator: Can use independently for routine/basic tasks (pre-junior level).

Level 3 — Applied Practitioner: Can audit, integrate with related tools, and troubleshoot simple issues (pre-junior level).

Level 4 — Junior Engineer: Can teach, build with the tool, and solve problems in organizational contexts.

Level 5 — Senior Engineer: Tool is second nature; can optimize, debug, innovate, and mentor others.

Python

I use Python for data cleaning, ETL pipelines, connecting to Google Drive and other data sources, and automating repetitive tasks. I also apply it in exploratory projects like building a CPU emulator and packaging my own utility libraries.

Inventory

Current Comfort Level

  • Structure Python projects optimized for scalability and collaboration.
  • Write clean, maintainable Python code.
  • Write programs with error handling, event logging, and environment-independent execution.
  • Design reusable Python modules and packages with __init__.py.
  • Safeguard secrets using .gitignore and environment variables.
  • Maintain auditability using logging.
  • Perform ETL workflows.
  • Transform and analyze tabular data using Python data structures.
  • Design and manage lightweight relational databases using sqlite3.
  • Perform basic quality control using pytest and ruff.
  • Set up virtual environments and manage remote and editable packages using pip.
  • Build and install Python APIs.
  • Connect to Google Drive using Google's API.
  • Use Git/GitHub for version control and collaboration.
Immediate Next Steps (1–2 months)

  • Become fluent with key data engineering libraries such as numpy, pandas, pathlib, requests, selenium, and scrapy.
  • Gain competency with tooling and code-quality modules such as sys, dis, and mypy.
  • Strengthen professional practices by structuring ETL as reusable functions and pipelines, and integrating programs with terminal shells.
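As a concrete sketch of several of the practices above (ETL as reusable functions, audit logging, sqlite3 storage, and secrets from environment variables), here is a minimal pipeline. All names and data are illustrative, not from a real project:

```python
import logging
import os
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger(__name__)


def extract(rows):
    """Stand-in source; in practice this might read a CSV or call an API."""
    log.info("Extracted %d rows", len(rows))
    return rows


def transform(rows):
    """Normalize names and drop empty records."""
    cleaned = [{"name": r["name"].strip().title()} for r in rows if r.get("name")]
    log.info("Transformed %d rows", len(cleaned))
    return cleaned


def load(rows, db_path):
    """Load the cleaned rows into a lightweight sqlite3 table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS people (name TEXT)")
        conn.executemany("INSERT INTO people (name) VALUES (:name)", rows)
    log.info("Loaded %d rows into %s", len(rows), db_path)


if __name__ == "__main__":
    # Config comes from an environment variable, never hard-coded.
    db_path = os.environ.get("ETL_DB_PATH", ":memory:")
    load(transform(extract([{"name": "  ada lovelace "}, {"name": ""}])), db_path)
```

Keeping each stage a separate function makes the pipeline testable in isolation and reusable across projects.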

InfoSec

I implement best infosec practices in everything I do — from securing credentials and systems to protecting the privacy of the people whose data I handle.

Inventory

Current Comfort Level

  • Apply core infosec principles across systems and data workflows.
  • Manage and track IAM, assign role-based access controls (RBAC), and enforce the principle of least privilege.
  • Map and document data flows throughout the data lifecycle.
  • Use data services that provide encryption in transit and at rest.
  • Maintain awareness of privacy-by-design and data minimization practices.

Next Steps

  • Establish expertise to educate and collaborate with teams on secure data and device handling.
  • Learn and practice tools for auditing and testing vulnerabilities (e.g., OWASP ZAP, Nessus, Azure Security Center).
  • Develop confidence in documenting and reporting security posture and incidents.
  • Implement logging and monitoring to support audit trails and anomaly detection.
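One way to start on that last item is structured audit logging: emitting each security-relevant event as a JSON record makes later audit review and anomaly detection far easier than free-form text. A minimal sketch (the formatter, logger name, and event names are illustrative):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
            "user": getattr(record, "user", "unknown"),
        })


handler = logging.StreamHandler()  # in practice, a FileHandler or log shipper
handler.setFormatter(JsonFormatter())
audit = logging.getLogger("audit")
audit.addHandler(handler)
audit.setLevel(logging.INFO)

# `extra` attaches who did what, preserving an audit trail per event.
audit.info("credential_rotation", extra={"user": "svc-etl"})
```

Because every record carries the same fields, downstream tools can filter by user or event without parsing prose.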

Git/GitHub

I use Git/GitHub to version-control my projects, manage branches, share my work, and collaborate (currently just with myself). It allows me to track changes, experiment safely, and document my learning journey across repositories.

Inventory

Current Comfort Level

  • Recognize Git as a system of record, where every change is traceable.
  • Understand how teams use Git and GitHub to keep codebases compatible and workflows aligned.
  • Use Git to track changes and maintain a remote repository over time.
  • Maximize clarity and efficiency by following standard naming conventions and documentation for branches, commits, and repositories.
  • Create deliberate version history and protect work with consistent use of add, commit, and push.
  • Manage pull requests and merges via terminal (individual and collaborative projects).
  • Create, work with, and merge experimental branches on solo projects and individual branches on collaborative projects.
  • Merge branches using both merge and rebase.
  • Diagnose and fix merge/rebase conflicts.
  • Protect origin/main from direct or unauthorized pushes through branch protections.
  • Explore version history, branches, and commits using status, log, diff, and show.
  • Access and recover prior versions using stash, cherry-pick, restore, and reset.
  • Set up a GitHub repository and connect to my local project.
  • Connect a local machine to GitHub using SSH.

Immediate Next Steps (1–2 months)

  • Improve comfort with resolving merge conflicts independently.
  • Practice advanced history rewriting with rebase -i (interactive rebase).

PowerShell

I use PowerShell to navigate and operate on directories and files, and to write bootstrap programs to get projects off the ground quickly and uniformly.

Inventory

Current Comfort Level

  • Navigate directories.
  • Run commands with conditional logic and ForEach-Object loops.
  • Build .ps1 scripts to automate and standardize project setup.

Immediate Next Steps (1–2 months)

  • Strengthen fluency with core PowerShell commands and syntax to navigate, query, and manage directories and files more efficiently in the terminal.

Databases

I use SQLite to persist data and test lightweight pipelines, and I am expanding toward more advanced relational database work.

Inventory

Current Comfort Level

  • Understand core database design principles (e.g., tables, relationships, normalization, granularity, primary and foreign keys).
  • Apply working knowledge of core SQL syntax.
  • Use sqlite3 in Python projects for structured data storage and retrieval.

Immediate Next Steps (1–2 months)

  • Build fluency by using SQL in everyday work.
  • Practice designing small schemas to support ETL pipelines.
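A small worked example of the kind of schema that supports an ETL pipeline: two tables joined by a primary/foreign key pair, with foreign-key enforcement switched on. Table and column names are illustrative, not from any real project:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite3 leaves FK checks off by default
conn.executescript("""
    CREATE TABLE sources (
        source_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL UNIQUE
    );
    CREATE TABLE records (
        record_id INTEGER PRIMARY KEY,
        source_id INTEGER NOT NULL REFERENCES sources(source_id),
        payload   TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO sources (name) VALUES ('drive_export')")
conn.execute(
    "INSERT INTO records (source_id, payload) VALUES (?, ?)",
    (1, "row 1"),
)

# A join recovers which source each record came from.
row = conn.execute("""
    SELECT s.name, r.payload
    FROM records r JOIN sources s USING (source_id)
""").fetchone()
print(row)  # ('drive_export', 'row 1')
```

Separating sources from records keeps the schema normalized: each source is stored once, and every record carries only a key back to it.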

Google Cloud Platform

I use Google Cloud Platform to give my IDE and Python scripts direct, shell-like access to our team's data in the cloud.

Inventory

Current Comfort Level

  • Understand core cloud concepts such as availability and redundancy.
  • Navigate GCP architecture and create Organizations, Projects, and Services.
  • Configure billing and safeguards to stay within budget.
  • Enable APIs and create OAuth and service account credentials.

Where I'm headed

  • Build fluency with Cloud Storage, Cloud SQL, and BigQuery to conduct large-scale data projects.