I designed and developed arkis.io as a landing page for the recently-founded software development group Arkis, which consists of myself and several of the other students with whom I took AP Computer Science in the 2016-2017 school year.
The first thing users see when entering the website is a CSS-powered "intro animation": several CSS keyframe animations playing in series to provide a compelling introduction to the site. The animation ensures that users are engaged and pulled into the site from the beginning.
The site also includes more sophisticated JS-powered animations. At the bottom of the page, immediately after the intro animation plays, is a "mist" animation. This creates a sense of atmosphere and also provides a smooth boundary between the first and second major sections of content on the page while scrolling. The effect is achieved via a basic particle simulation, where each of the particles is rendered as a small "cloud" texture. When the particles move and rotate past each other, a convincing effect of dynamic fog is achieved.
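The idea behind the mist effect can be sketched as a minimal particle system. The real effect is implemented in JavaScript on a canvas; the class and parameter values below are illustrative, not the site's actual code:

```python
import math
import random

class MistParticle:
    """One 'cloud' sprite: drifts sideways and slowly rotates."""

    def __init__(self, width, height):
        self.width = width
        self.x = random.uniform(0, width)
        self.y = random.uniform(0, height)
        self.angle = random.uniform(0, 2 * math.pi)
        self.drift = random.uniform(5, 20)     # horizontal speed, px/s
        self.spin = random.uniform(-0.2, 0.2)  # rotation speed, rad/s

    def step(self, dt):
        # Drift sideways, wrapping around so the fog never runs out
        self.x = (self.x + self.drift * dt) % self.width
        self.angle += self.spin * dt

particles = [MistParticle(800, 200) for _ in range(40)]
for _ in range(60):  # simulate one second at 60 fps
    for p in particles:
        p.step(1 / 60)
```

Drawing each particle as a translucent cloud texture at its current position and rotation is what turns this simple update loop into the appearance of fog.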
Hover over the animation to reveal markers indicating the position and rotation of each moving particle.
Apart from the complex front-end elements, arkis.io is supported by a full back-end hosted on Now. The back-end caches and serves Twitter profile images for each of the members listed on the site, and powers the contact form, which lets visitors submit messages that land as emails in my personal inbox. The animation to the right is shown as an intermediate state before the back-end has confirmed successfully sending a user's message.
livejson is an open-source Python library I wrote which enables users to easily manipulate JSON files on-disk as though they were in-memory Python objects. It's a pure-Python library with no dependencies, written for compatibility across Python 2 and Python 3. The animation below shows livejson in action; the user is modifying an in-memory object in the Python REPL, and the changes made in memory can be seen reflected in the JSON file displayed behind.
My primary goal in developing the livejson library was ease of use. The "live" dicts that the library provides are virtually indistinguishable from their native Python equivalents, so they can be used as drop-in replacements in almost all cases. Because it's a public library, it was important to me that its reliability was guaranteed, so every line of its code is covered by unit tests. By default, the library writes to disk each time the in-memory object is mutated, but for cases of many consecutive writes, a Python context manager can be used to group multiple modifications into a single file write.
# Default behavior: each mutation triggers its own write to disk
f = livejson.File("test.json")
f["a"] = "b"
f["c"] = "d"

# Context manager: both changes are grouped into a single file write
with livejson.File("test.json") as f:
    f["a"] = "b"
    f["c"] = "d"
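The write-on-mutation behavior can be sketched as a dict subclass that re-serializes itself after every change. This is a simplification for illustration only; the real library also handles nested structures, lists, and context-manager batching:

```python
import json

class LiveDict(dict):
    """Toy version of livejson's core idea: a dict that persists
    itself to a JSON file after every mutation."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                super().__init__(json.load(f))
        except FileNotFoundError:
            super().__init__()

    def _flush(self):
        # Rewrite the whole file after each change
        with open(self.path, "w") as f:
            json.dump(dict(self), f)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._flush()

    def __delitem__(self, key):
        super().__delitem__(key)
        self._flush()

f = LiveDict("test.json")
f["a"] = "b"  # the file on disk is updated immediately
```

Batching, as in the context manager above, amounts to suppressing `_flush` until the `with` block exits.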
Wikipedia Map is a web app for visualizing the connections between Wikipedia articles. The project scrapes Wikipedia, using the links between pages that are present in each article as an indicator of topics that might be connected.
After simply entering any topic for which a Wikipedia article exists, users can begin to explore connected topics via an interactive, navigable graph (rendered on a canvas using vis.js) which displays all of the articles the user has "discovered." After a while spent exploring the map, a large graph of hundreds of connected topics is formed.
The main interface to Wikipedia Map is its graph. The graph presents each article as a round blue bubble, a "node," and each node can be clicked to expand into multiple new "connected" nodes for related articles.
Each node expansion requires a request to the Wikipedia Map back-end. The project originally featured a Python/Flask back-end, but for increased maintainability I've since migrated it to a Node.js/Express app hosted on Now. The back-end is responsible for all of the logic of fetching, parsing, and processing each Wikipedia article's HTML in order to extract the links.
Only the links from the first paragraph of each Wikipedia article are included, because first-paragraph links tend to point to the topics most relevant to the article. For example, the image to the right highlights the links that Wikipedia Map selects from the first paragraph of the "Cat" article.
This also has the benefit of preventing long articles from expanding into an overwhelming and confusing amount of information.
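The first-paragraph extraction can be sketched with Python's standard-library HTML parser. The production back-end is Node.js/Express, and real article markup is messier; the class name and HTML snippet here are illustrative:

```python
from html.parser import HTMLParser

class FirstParagraphLinks(HTMLParser):
    """Collect the targets of wiki links inside the first <p> of a page."""

    def __init__(self):
        super().__init__()
        self.in_first_p = False
        self.done = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "p" and not self.done:
            self.in_first_p = True
        elif tag == "a" and self.in_first_p:
            href = dict(attrs).get("href", "")
            if href.startswith("/wiki/"):
                self.links.append(href[len("/wiki/"):])

    def handle_endtag(self, tag):
        if tag == "p" and self.in_first_p:
            self.in_first_p = False
            self.done = True  # only the first paragraph counts

# A tiny stand-in for article HTML fetched from Wikipedia
html = ('<p>The <a href="/wiki/Felidae">cat</a> is a '
        '<a href="/wiki/Domestication">domesticated</a> species.</p>'
        '<p>See also <a href="/wiki/Dog">dog</a>.</p>')
parser = FirstParagraphLinks()
parser.feed(html)
print(parser.links)  # only the first paragraph's links survive
```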
Wikipedia Map's main goal is to present relationships between topics in a digestible format. Several subtle aspects of the web app help to achieve this goal. First, nodes are lighter in color when they are farther away from the central node.
For example, if the user started with "Penguin" and it took 5 expansions to reach "Ancient Greek", "Ancient Greek" would be a lighter color than a node like "Birding," which only took 2 steps to reach. This indicates that "Birding" is a topic more closely connected with "Penguin" than "Ancient Greek."
Second, when the user hovers over any node in the graph, the path that the user took to reach that article is highlighted. This helps users visualize exactly how, step by step, two topics on the graph are connected, which is useful for breaking down the connections between distantly-related articles.
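The depth-based lightening described above can be sketched as a linear interpolation from a base color toward white. The constants here are illustrative, not the site's actual palette:

```python
def node_color(depth, max_depth=10, base=(41, 128, 185)):
    """Lighten a base RGB color toward white as graph depth increases."""
    t = min(depth, max_depth) / max_depth  # 0.0 at the center, 1.0 far away
    return tuple(round(c + (255 - c) * t) for c in base)

node_color(0)   # the central node keeps the full base color
node_color(10)  # a maximally distant node fades to white
```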
maze-cv is a proof-of-concept tool that runs on iOS devices and is capable of solving mazes plotted on special graph paper with minimal user input. It is built on the fantastic Pythonista platform for Python development on iOS.
I wrote maze-cv to learn about basic image manipulation and computer vision algorithms, so I implemented all of the logic involved myself. Rather than borrowing from high-level libraries like OpenCV, I read image values pixel-by-pixel using the Python Imaging Library.
Because maze-cv was designed more as a proof of concept than a practical tool, it imposes several restrictions on the user. Mazes must be plotted on a 16x16 graph-paper grid between four clearly marked red squares. This makes the task of interpreting the maze much easier for the program, because the location of the maze is easily identifiable and the size of the grid is known.
The first step that maze-cv takes is to identify the corners of the maze by finding the red markers. Each pixel in the image is scanned, and its hue and saturation are inspected to determine which pixels are "red." Then, a breadth-first-search algorithm is used to connect adjacent red pixels. The image to the left shows a test in which the center of each identified red object in an image is marked with a small black dot.
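The marker-finding step can be sketched on a tiny RGB grid as follows; the hue and saturation thresholds are illustrative rather than maze-cv's actual values:

```python
import colorsys
from collections import deque

def is_red(rgb):
    """Classify a pixel as 'red' by its hue and saturation."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    return (h < 0.05 or h > 0.95) and s > 0.5

def red_blob_centers(pixels):
    """Group adjacent red pixels with BFS; return each blob's center."""
    height, width = len(pixels), len(pixels[0])
    seen = set()
    centers = []
    for y in range(height):
        for x in range(width):
            if (x, y) in seen or not is_red(pixels[y][x]):
                continue
            blob, queue = [], deque([(x, y)])
            seen.add((x, y))
            while queue:
                cx, cy = queue.popleft()
                blob.append((cx, cy))
                for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                               (cx, cy + 1), (cx, cy - 1)):
                    if (0 <= nx < width and 0 <= ny < height
                            and (nx, ny) not in seen
                            and is_red(pixels[ny][nx])):
                        seen.add((nx, ny))
                        queue.append((nx, ny))
            # The blob's center is the mean of its pixel coordinates
            centers.append((sum(p[0] for p in blob) / len(blob),
                            sum(p[1] for p in blob) / len(blob)))
    return centers

# Two separate red squares on a white background
W, R = (255, 255, 255), (220, 30, 30)
grid = [[R, R, W, W],
        [R, R, W, W],
        [W, W, R, R],
        [W, W, R, R]]
print(red_blob_centers(grid))  # one center per marker
```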
The next step that
maze-cv goes through is to transform the maze to get a
straight-on view of it, to eliminate the variable of perspective. This allows the program to treat the image as a regular 16x16 grid and to easily segment each piece of the maze later.
The animation to the right illustrates the process of perspective transformation that
maze-cv uses. Hover over the animation to highlight the box being transformed.
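The mapping from the marked quadrilateral onto a regular square can be illustrated with a bilinear warp. Note that maze-cv performs a true perspective transform; bilinear interpolation is a simplification used here to keep the sketch short:

```python
def bilinear_warp(corners, u, v):
    """Map normalized square coordinates (u, v) in [0, 1] into the
    quadrilateral defined by corners (top-left, top-right,
    bottom-right, bottom-left).

    A bilinear approximation of the perspective transform."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Interpolate along the top and bottom edges, then between them
    top = (x0 + (x1 - x0) * u, y0 + (y1 - y0) * u)
    bottom = (x3 + (x2 - x3) * u, y3 + (y2 - y3) * u)
    return (top[0] + (bottom[0] - top[0]) * v,
            top[1] + (bottom[1] - top[1]) * v)

# Sample a 4x4 grid of source positions inside a skewed quadrilateral
quad = [(10, 5), (110, 15), (120, 95), (5, 90)]
samples = [[bilinear_warp(quad, u / 3, v / 3) for u in range(4)]
           for v in range(4)]
```

Reading the source image at each sampled position produces the straightened, "straight-on" view of the maze.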
The third step that maze-cv performs is to threshold the image.
Using adaptive thresholding
to avoid errors caused by shadows on the page, each pixel in the transformed image is
determined to be either light or dark, resulting in an image like the one to the left.
This image can now be easily segmented to determine the value of each square in the 16x16 grid.
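The adaptive step can be sketched like this (the block size and bias are illustrative): each pixel is compared against the mean of its local neighborhood rather than a single global cutoff, so a gradual shadow shifts the local mean along with the pixels under it instead of flipping whole regions to "dark":

```python
def adaptive_threshold(gray, block=3, bias=2):
    """Binarize a grayscale grid (0-255): a pixel is light (1) if it is
    brighter than the mean of its local neighborhood minus a small bias."""
    h, w = len(gray), len(gray[0])
    r = block // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Collect the surrounding block, clipped at the image edges
            neighborhood = [gray[ny][nx]
                            for ny in range(max(0, y - r), min(h, y + r + 1))
                            for nx in range(max(0, x - r), min(w, x + r + 1))]
            local_mean = sum(neighborhood) / len(neighborhood)
            row.append(1 if gray[y][x] > local_mean - bias else 0)
        out.append(row)
    return out

# A bright page with a single dark pixel: only the dark pixel is 0
result = adaptive_threshold([[200, 200, 200],
                             [200, 50, 200],
                             [200, 200, 200]])
```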
After being transformed and thresholded, the image is segmented. The grid of pixels is split into 256 chunks of equal size, one for each square on the 16x16 grid. Within each segment, the value of each thresholded pixel (0 or 1) is averaged. A weighted average is used such that pixels near the center of the segment count for more than those near the edges; this helps correct for cases in which the grid isn't quite lined up after transformation. A weighted average of less than 0.5 (more dark than light pixels) is considered to be a wall (a black square), while a weighted average greater than 0.5 is considered to be traversable path (a white square). By repeating this calculation for each square, a pixel map like the one to the right is generated.
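The segmentation step can be sketched as follows; the particular weighting function is illustrative, since any scheme that favors central pixels serves the same purpose:

```python
def segment(binary, grid=16):
    """Split a square binary image into grid x grid cells and decide
    wall (0) vs. path (1) per cell using a center-weighted average."""
    n = len(binary)
    cell = n // grid
    maze = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            total = weight_sum = 0.0
            center = (cell - 1) / 2
            for y in range(gy * cell, (gy + 1) * cell):
                for x in range(gx * cell, (gx + 1) * cell):
                    # Weight each pixel by its closeness to the cell center
                    dy = abs(y - (gy * cell + center))
                    dx = abs(x - (gx * cell + center))
                    w = 1.0 / (1.0 + dx + dy)
                    total += binary[y][x] * w
                    weight_sum += w
            row.append(1 if total / weight_sum > 0.5 else 0)
        maze.append(row)
    return maze

# A 4x4 checkerboard of 2x2 blocks collapses to a 2x2 maze
binary = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 1, 1],
          [0, 0, 1, 1]]
print(segment(binary, grid=2))  # [[1, 0], [0, 1]]
```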
The final step, after the image is segmented, is to solve the maze. A graph is constructed in which all white squares are nodes. Using breadth-first search on the data from the previous step, adjacent white squares are connected by edges of equal weight in the graph. Then, Dijkstra's algorithm is used to solve the maze (the A* algorithm doesn't bring sufficient performance improvements on mazes of this size to justify its increased implementation complexity).
For now, the user has to manually mark the starting and ending points on the maze, but this could change in a future version.
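The solving stage can be sketched as follows. The grid encoding (1 = white/path, 0 = wall) and the `(row, col)` start/end interface are assumptions for illustration, not maze-cv's actual API:

```python
import heapq

def solve_maze(maze, start, end):
    """Dijkstra's algorithm over the grid of white (1) squares.
    With uniform edge weights this behaves like BFS, mirroring
    maze-cv's choice of Dijkstra over A* for simplicity."""
    rows, cols = len(maze), len(maze[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == 1:
                if d + 1 < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + 1
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (d + 1, (nr, nc)))
    if end not in dist:
        return None  # no path exists
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

maze = [[1, 0, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(solve_maze(maze, (0, 0), (0, 2)))  # the only route around the wall
```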
Image2ASCII is a Python library I wrote which is capable of approximating images with text, creating ASCII art from any source. To make the tool more accessible, I also wrapped the library in a simple app using Pythonista.
Image2ASCII generates artwork by looking at the source image and matching brightness levels
in different parts of the image with characters that have an approximately appropriate
"visual weight" on the page. Characters that fill more of their area with black, like $, are considered to have a high visual weight, while characters like ', which leave most of the space they occupy blank, are considered to have low visual weight.
In order to apply this principle, the first step that the program takes is to generate a map between characters and their visual weights. The program renders each ASCII glyph onto a canvas the size of one character, and computes the average brightness of the resulting image. Lower brightnesses indicate characters with more visual weight, because more of the image is black.
After building the map between each character and its visual weight, Image2ASCII assigns characters to different portions of the image. The image is converted to black and white, its contrast is increased to improve the clarity of the ASCII art, and the image is downscaled to a size where each pixel can reasonably be represented by a single character. Then, pixel by pixel, the character with the visual weight most similar to the brightness of the pixel in question is selected and appended to the output string. The result is the entire artwork represented as text.
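The selection step can be sketched as follows. The weight table here is hand-made for illustration; the real library computes the table by rendering each glyph, as described above:

```python
# Illustrative visual weights: 0.0 = blank, 1.0 = fully dark
WEIGHTS = {" ": 0.0, "'": 0.1, ":": 0.25, "+": 0.4,
           "o": 0.55, "8": 0.75, "$": 0.9, "@": 1.0}

def char_for_brightness(brightness):
    """Pick the glyph whose visual weight best matches a pixel.

    brightness runs 0 (black) to 255 (white); dark pixels need
    heavy glyphs, so we match against 'darkness'."""
    darkness = 1 - brightness / 255
    return min(WEIGHTS, key=lambda ch: abs(WEIGHTS[ch] - darkness))

def to_ascii(gray_rows):
    """Convert a small grayscale image (rows of 0-255 values) to text."""
    return "\n".join("".join(char_for_brightness(px) for px in row)
                     for row in gray_rows)

print(to_ascii([[0, 128, 255],
                [255, 128, 0]]))
```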
Image2ASCII provides functions for rendering artwork onto an image for display, but the example to the left uses the raw text output and dynamically renders it onto a canvas.
ui2 is a Python module for the iOS app Pythonista which builds on top of Pythonista's ui module to add new functionality. ui2 empowers script authors to take advantage of a much wider range of components, allowing them to create more complex, beautiful, and effective interfaces. The project was commissioned by another community member, who thought that I had the necessary skill to carry out the project and paid me to complete it.
ui2 lets users access UI components that are not available through the original ui module, and also provides pure-Python abstractions of ui features to add flexibility and convenience, such as allowing the chaining of multiple animations.
One of the most significant improvements that ui2 makes over the original ui module is the abundance of new Python classes wrapping UIKit views that were previously inaccessible from Pythonista. These include:
BlurView, which wraps UIVisualEffectView to create the "frosted glass" effect common since iOS 7.
CameraView, which wraps AVCaptureVideoPreviewLayer to display a live camera feed, useful for building camera-based UIs.
MapView, which wraps MKMapView to display a mutable Apple Maps view. The coordinates and scale of the map can be read and controlled programmatically, so with a little effort interactive graphics can be precisely placed on top of geographical features.
ProgressPathView, which wraps CGPath to allow using a Path of any shape to form a progress bar.
...and many more.
ui2 provides an extensive set of tools for fully creating complex and polished user interfaces, including interactivity and animations. ui2 builds upon some Pythonista interfaces to add new features, but also implements its own new pure-Python interfaces and wraps some new Objective-C APIs. Some of the new non-UIView features included in ui2:
ChainedTransition classes that interface with the Objective-C [UIView transitionFromView:toView:duration:options:completion:] method to allow for animated transitions between views like cross dissolves, page flips, and 3D "card flip" effects (see the video to the left).
A ui2.bind decorator which allows binding Python methods to keyboard shortcuts on Bluetooth keyboards connected to the device, functionality which was previously entirely unavailable. Keyboard shortcuts even show up in the "discoverability" menu when the command key is held down on the keyboard.
ChainedAnimation classes which allow for playing multiple animations in sequence without manually creating delays for the duration of each animation.
DelayManager classes which allow not only for the scheduling of asynchronous actions in the future, but also for the subsequent cancellation and mutation of these events, which is impossible using Pythonista's built-in delay mechanism.
A ui2.Path class to replace ui.Path that stores each component of the path in memory so that it can be read back after the path is modified. The old ui.Path class was write-only, and it was impossible to retrieve information about the components of a path; ui2.Path intercepts every method call and remembers each piece added to the path.
For many years, I was one of the most active members of the community surrounding the iOS app Pythonista, a fantastic platform for mobile Python development, and one of the most powerful apps for developers on the iOS platform as a whole. During the time that I spent most heavily using the platform, I made several meaningful and well-used contributions to the software community surrounding the app, many of which took the form of libraries that brought new functionality to the app.
pythonista-theme-utils is a utility I wrote for Pythonista that allows users to retrieve information about the current editor theme and to automatically style user interfaces to match it. This is useful for authors of scripts that are designed to perform tasks within the editor. At the time this script was written, this functionality was unavailable; however, just four days after I published my code, the developer of the Pythonista app integrated this functionality directly into the app, along with a credit to me in the release notes.
Another significant contribution I made to the community was the initial creation and maintenance of the collaborative Pythonista-Tweaks module, which aims to collect in one place some general enhancements, extensions, and modifications to Pythonista's behavior, and to generally fill in gaps in Pythonista's functionality by querying Objective-C APIs.
Pictured is an example of functionality that Pythonista-Tweaks enables. The script pictured displays progress in real time as a badge on the app icon, useful for keeping users informed about the progress of long-running tasks even when the app is not in the foreground.
Pythonista Cloud is an effort by me to create a full-fledged package manager for Pythonista which would allow downloading remote scripts and importing remote modules. The project is not complete, nor is it under active development at the moment, but it proved popular; its components have accumulated stars on GitHub.
The service consists of a REST API that communicates with an underlying CouchDB instance.
This allows authors to submit packages to the registry, and allows clients to query for
package information. The official client is a Python module that exposes a custom import handler capable of downloading modules on the fly, or seamlessly exposing cached versions. The goal is that users can simply replace an import x with from cloud import x and achieve the same behavior without having to worry about whether x is installed locally.
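The mechanism behind such an import handler can be sketched with importlib's meta-path hooks. Here FAKE_REGISTRY stands in for the real package registry, and none of these names reflect Pythonista Cloud's actual API:

```python
import importlib.abc
import importlib.machinery
import sys

FAKE_REGISTRY = {  # stands in for source fetched from the registry
    "greeting": "def hello():\n    return 'hi from the cloud'\n",
}

class CloudFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Meta-path finder that 'downloads' module source on demand."""

    PREFIX = "cloud."

    def find_spec(self, fullname, path=None, target=None):
        name = fullname[len(self.PREFIX):]
        if fullname.startswith(self.PREFIX) and name in FAKE_REGISTRY:
            return importlib.machinery.ModuleSpec(fullname, self)
        if fullname == "cloud":
            # The bare 'cloud' package is an empty namespace to hang
            # registry modules off of
            return importlib.machinery.ModuleSpec(fullname, self,
                                                  is_package=True)
        return None

    def create_module(self, spec):
        return None  # use the default module creation machinery

    def exec_module(self, module):
        if module.__name__ == "cloud":
            return  # nothing to execute for the namespace package
        source = FAKE_REGISTRY[module.__name__[len(self.PREFIX):]]
        exec(source, module.__dict__)

sys.meta_path.insert(0, CloudFinder())

from cloud import greeting  # resolved through CloudFinder, not disk
print(greeting.hello())
```

A real client would replace FAKE_REGISTRY with an HTTP fetch against the registry's REST API and a local cache of previously downloaded modules.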