Category: Software, Coding, Programming

All the digital things

LED-lit, laser-cut stage with performers

Greg’s Laser-Cut Plywood and LED Stage Build

This summer, I had the opportunity to design and build one of the stages for a small annual festival. I’d just returned from a 3-week bike tour through Portugal and Southern Spain, where I’d seen an abundance of amazing historical buildings, from cathedrals to ancient fortresses.

Inspired by the amazingly elaborate details and layers of cultural influence in the architecture I’d seen, I wanted to create an intricate laser-cut plywood design that incorporated LED strips for nighttime stage lighting, but that still looked visually interesting during the daytime performances. It also had to be built ahead of time and easily assembled on-site.

Catedral de Sevilla
Architectural detail on the Catedral de Sevilla; one of my main reference photos for design elements to use in the stage design.

I originally planned to use Rhino with Grasshopper for creating the design. Grasshopper provides a node-based way of scripting parametric models, and I’ve seen people make some incredible computational designs using it in combination with Rhino. Although I’d really like to learn how to use these programs, and they would have been a good fit for this project, due to time constraints, I stuck to the skills I already have from my background as a mechanical engineer. This meant using OnShape, an online Computer Aided Design (CAD) modeling program like SolidWorks or Autodesk Inventor.

For those unfamiliar with parametric CAD tools, they work a bit differently than tools like Illustrator or Paint, where you create the design directly. With parametric modeling, you define a series of geometric constraints, dimensions, and formulas that describe the shapes you’re trying to create.

Parametric models work a little like complex equations or software: they take time to set up, but once you do, you can go back and adjust the inputs and get near-instant updates without having to recreate or manually tweak the design.

With parametric modeling, as when writing software, it’s good to follow the DRY principle: “Don’t Repeat Yourself.”

For the rose window element design, the first step was to look for any symmetries. In this case, it meant identifying the smallest “unit cell” that could be replicated to create the full design through mirroring, copying and patterning it. Fortunately every CAD tool has built-in commands to mirror and to rotationally pattern a part. These built-in commands make it easy to create the full piece from a smaller, simpler “unit cell,” while being able to update the original and see how it would look when patterned.
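To illustrate the idea outside of any particular CAD package, here is a minimal Python sketch of the same DRY approach: define one unit cell once, then mirror and rotationally pattern it. The coordinates and the 12-fold symmetry below are placeholders for illustration, not the actual stage geometry.

```python
import numpy as np

def mirror_across_x(points):
    """Mirror 2D points across the horizontal (x) axis."""
    return points * np.array([1.0, -1.0])

def rotate(points, angle_rad):
    """Rotate 2D points about the origin by angle_rad."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T

# Hypothetical unit cell: a few outline points in the upper half of one wedge.
unit_cell = np.array([[10.0, 1.0], [14.0, 3.0], [12.0, 5.0]])

# Mirror to complete one wedge, then rotationally pattern the wedge.
wedge = np.vstack([unit_cell, mirror_across_x(unit_cell)])
n_wedges = 12  # 12-fold radial symmetry, purely for illustration
full_window = np.vstack(
    [rotate(wedge, 2 * np.pi * i / n_wedges) for i in range(n_wedges)]
)
print(full_window.shape)  # (72, 2): the whole design derived from one small cell
```

Editing the three unit-cell points and re-running regenerates the entire pattern, which is exactly what the CAD mirror and circular-pattern commands give you on real geometry.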

Overview of a rose window design, with lines showing the radial and mirror symmetry axes
Radial and symmetry axes of the rose window design.
A unit cell of a rose window design.
Rose window unit cell.

 

Creating a model for the pillars was more difficult. I wanted to be able to pattern a design along a gentle curve while having it adapt to the width between the curve and the centerline. There’s no built-in CAD command to pattern a part while changing the inputs that define the part (well, there kind of is, but not in a way I was able to make work for this design). Instead, I set up a part for the pillar unit cell with different “configurations,” where each configuration had the height and width of the bounding shape matching those measured along the curve. This was still a somewhat manual process because, if I changed the shape of the curve, I had to update the width and height of each part configuration to match. That said, with the curve fixed, I was able to change a single design and have every instance of the unit cell update, which was my desired result. It’s worth noting that OnShape actually has its own scripting language, FeatureScript, which I could have used to write a custom command for the result I was hoping to achieve, but I didn’t have the time. I plan to explore this approach more in the future.
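To make that manual step concrete, here is a rough Python sketch of what a custom FeatureScript-style feature would need to compute: sample the bounding width and height of each unit cell along the curve. The curve function and dimensions below are hypothetical, not the actual pillar geometry.

```python
import numpy as np

def curve_width(y):
    """Hypothetical pillar edge: horizontal distance from the centerline
    to the gentle outer curve at height y (arbitrary units)."""
    return 6.0 + 2.0 * np.sin(y / 40.0)

cell_height = 10.0
n_cells = 8

# For each unit cell stacked along the pillar, measure the bounding box it
# must fill. In OnShape, these numbers went into one configuration per cell
# by hand, which is exactly the step a custom feature could automate.
for i in range(n_cells):
    y_bottom = i * cell_height
    y_top = y_bottom + cell_height
    width = min(curve_width(y_bottom), curve_width(y_top))
    print(f"cell {i}: height={cell_height:.1f}, width={width:.2f}")
```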

Diagram of the rose window column, with lines indicating “mirror symmetry” and “repeating unit cell.”
Identifying symmetry and unit cell pattern in the pillar design.
Unit cell
Configurable “unit cell” for the pillar.

All this modeling was to make the files required for the laser cutter, which reads 2D line drawings.

Someone who is proficient at a vector art tool like Illustrator likely could have created the same final design in 1/10th the time it took me to set up this complicated parametric CAD model. That being said, I had fun modeling it this way and I got more familiar with OnShape along the journey!

Once I was happy with the design (by which I mean out of time to continue tweaking it), I exported everything and headed to the laser cutter.

Laser cutting mostly went smoothly, although it took two passes to get all the way through the 1/4″ birch plywood. The main issue for the bigger parts was just getting the plywood to lay flat enough to keep the laser in-focus. I used every magnet in the drawer and could have used even more!

I hit a snag with the high-quality “Exterior Grade” Baltic Birch plywood I had originally purchased for the project from MacBeath Hardwood. Whatever the manufacturer treated it with to make it exterior grade prevented the laser from cutting past the first glue layer. After making this expensive error, I bought the cheaper 4×8′ regular “White Birch” sheets from MacBeath, which they helped me rip into thirds that fit nicely into both the laser-cutter work area and the back of my car. The total cut time was approximately 200 minutes, spread across several long, late-night sessions. Laying out and fixturing the cuts took far more time than the actual cutting.

Rose Window glueup
Glue layup of the rose window element; could have used even more clamps.

The final assembled pieces have a solid back spaced 1.5″ from the front cutout parts. I cut “rib” parts out of 3/4″ plywood and doubled them up to get the 1.5″ spacing. I then joined the parts with wood glue and a nail gun (the nails are invisible from far away and provided good clamping force while the wood glue dried). This resulted in surprisingly light and stiff parts.

I created the detail on the front of the panels by gluing on smaller parts. This layup was challenging due to the sheer quantity of small parts.

For the pillars where the unit cell had many unique configurations, there were literally hundreds of small parts that all had to go in specific locations!

I added holes for small brad nails to the laser-cut patterns, which made it easy to align the small parts during assembly and keep everything from sliding around during the glue-up. Once the glue was mostly dry, I removed the brad nails so they wouldn’t become a permanent part of the assembly. The back panels are removable for installation and maintenance of the LED strips, which are glued along the inside face of the ribs. I used silicone caulk for the LEDs, which works well as long as the ends of each LED strip are securely attached. The silicone caulk is strong enough to keep the strips in place, but easy to peel off if necessary.

Laser-cut pillars
Finished and stained panels.

At the festival, the pieces were in the capable hands of Radiant Atmospheres, an event lighting collective practically next door to Ace. They hooked up the LED strips to a DMX decoder, which let them control the strips from the same system they were using to drive the rest of the stage lighting and effects. They also brought two rear-projection units that set up an ever-shifting psychedelic pattern on the stage backdrop. I was really impressed with the work they did; it’s a bit hard to capture in photos, but the stage lighting was gorgeous. All in all, I’m pleased with how this project came out, and excited to take the lessons learned and continue to play with the laser cutter and other tools at Ace!

DJ performing in front of the rose window element
Ukrainian artist “Asymmetry” performing with the rose window element behind.

Oh, and the band in the cover image is the enchanting Foxtails Brigade!

What worked:

  • The alignment holes and brad nails made the glue-up substantially easier; it would have been a real nightmare to get things lined up without them.
  • In the design I left strategic gaps between parts to create the illusion that some parts were behind others, even though they were on the same layer. This visual trickery seemed to work; I had a few folks tell me they were surprised that it only had two layers.
  • I loved the effect of the indirect LED lighting on the back panel, especially the regions lit by two different LED strips. It created smooth gradients that I thought were beautiful. The default with LED art is to create more complexity by adding an ever-increasing density of LEDs, but in this case I think less was more. It’s only five unique colors for all three of the panels, but the natural blending on the back panel made it seem more complex than it was. A happy accident of the constraints of the materials/budget I had to work with!

What could be improved:

  • Creating the design out of hundreds of small parts made assembly incredibly time consuming. Designing for fewer, larger parts with more complexity per part would have cut down on the time it took to assemble everything.
  • The ribs between the front and back layers were time consuming to make; I “scored” lines onto thick plywood with a light laser pass and then cut them out with a jigsaw at home. This took a long time and was difficult to do accurately, even with the precise guide lines created on the laser cutter. If I were certified on the CNC machine at Ace, that would have been a better way to go. Fortunately, the closest audience members were approximately 15′ away, and most of the mistakes were invisible from that distance.
  • In hindsight, it would have been interesting to score inset lines from the edges of the parts on the laser-cutter; that would have been an easy way to suggest even more depth & visual interest.
People buying cards, "BELIEVE humanity will survive" card

The Solano Human Project – Stephen’s Homebrew Alternate Reality Game

Ace member Stephen had a major birthday coming up and decided to get his family together to play an epic Alternate Reality Game (ARG) inspired by the Jejune Institute, an immersive game created by Oakland-based artist Jeff Hull. Though Stephen had some previous experience making puzzles, creating an ARG with a coherent narrative based on a specific site presented a new challenge.

Tell your story and build your world in as few words as possible. People are here to play a game, not to read. Let them discover the world, as much as possible, through the gameplay. -Stephen’s advice

He used the Ace Laser to cut some of the key pieces for his adventure:

  • An acrylic box to hold kazoos
  • An acrylic screen with a logo for a movie inside a book
  • A wooden mirror box for the grand finale

Keep reading for edited excerpts from his fabulous in-depth write-up of The Solano Human Project on Medium.

Check out the full original post to immerse yourself in his fantastic Alternate Reality with: 

  • Great storytelling, and intricate worldbuilding
  • Text and instructions for each section of the game
  • Walkthroughs for each clue, real player experiences, and lessons learned along the way (including ones not mentioned in this shortened version)
  • Shoutouts to local businesses
  • More pics of the game in action

The Solano Human Project

For my birthday, I created an Alternate Reality Game for my family to play; it took place along a few blocks of Solano Avenue in Albany, California.

To start, I gave everyone a card that said:

"Instructions for The Game"

That’s right, I managed to rent a phone number with my initials in it! The card also introduced people to the game logo. I tried to mark almost every in-game item with the logo. (For reasons, as I write this blog post, the game is transitioning to a new logo so the ones in the pictures might be inconsistent.)

After you send the text, you receive a message from the year 2066:

We are texting you through a Time Portal from an Alternate Future Universe, where the Cyborgs have taken over. (Spoiler alert: it sucks!) We call ourselves Team Human. It is too late for us, but we hope we can help your timeline.

Next year — you call it “2024” — will be critical.

It is the year “FibJourney” floods social media with fake images. It is the year a self-driving car kidnaps the CEO of Raytheon. It is the year “ChetGPT” gains sentience and hacks into a NORAD base station.

To help you stop them, we’ll show you how the Cyborgs take power with their SLICED MEN weapon, and we’ll show you how to fight back with anti-Cyborg technology. And, most importantly, we’ll tell you the codename of the future leader of Team Human so you can contact her next year.

But we can’t do any of this in the open — ChetGPT is always listening, always scraping. So we will send you coded messages (and even objects) through the Time Portal. Pictures may look different from yours because they are from the Future. Act quickly — the Time Portal can only stay open for a limited amount of time.

YOU WILL GET A SERIES OF QUESTIONS.

TEXT THE QUESTION NUMBER, “:”, AND THEN THE ANSWER.

IF YOU DO NOT FOLLOW THIS FORMAT, YOUR TEXT WILL BE SENT TO A RANDOM PERSON.

This is basically the “intro video” to every escape room, to set the stage. It’s the longest text that will be sent during the game. (Random texts — like “hi” — would result in Easter eggs.)
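The reply format amounts to a tiny protocol, so here is a hypothetical Python sketch of how a bot on the rented number might parse “QUESTION NUMBER:ANSWER” texts. The answer key and response strings are made up for illustration; this is not the game’s actual backend.

```python
import re

# Expected format: question number, a colon, then the answer (e.g. "2:tennis").
MESSAGE_RE = re.compile(r"^\s*(\d+)\s*:\s*(.+?)\s*$")

ANSWERS = {1: "yes", 2: "tennis"}  # hypothetical answer key for the warm-ups

def handle_text(body: str) -> str:
    match = MESSAGE_RE.match(body)
    if not match:
        # Off-format texts ("hi") get an Easter egg instead of a clue.
        return "YOUR TEXT HAS BEEN SENT TO A RANDOM PERSON. (Just kidding. Hi!)"
    number, answer = int(match.group(1)), match.group(2).lower()
    if ANSWERS.get(number) == answer:
        return f"CORRECT. STAND BY FOR QUESTION {number + 1}."
    return "THE TIME PORTAL DOES NOT RECOGNIZE THAT ANSWER. TRY AGAIN."

print(handle_text("2:tennis"))
print(handle_text("hi"))
```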

The rest of this article is a spoiler-filled walk-through. If you want to play the game*, STOP READING NOW!

*Email [email protected] if you want to play.


I started the game with a couple of warm-up questions.

Q1:YOUR JOURNEY BEGINS AT 1286 SOLANO AVE. ARE YOU THERE NOW? [REPLY “1:YES”]

Large group of people on Solano Ave.
The game begins…

> 1:yes

The Cyborgs did not begin by waging open war against the humans.

At first, they freed us from all labor, and promised us lives of leisure.

But then they began to crush our spirits, starting with sports.

A NEARBY SPORTING GOODS STORE PROVIDED THE CYBORGS WITH EQUIPMENT FOR THEIR ENORMOUS BODIES.

Q2:WHAT IS THE NAME OF THE SPORT?
[REPLY “2:_ _ _ _ _ _”]

Tennis shop on Solano Ave.

> 2:tennis

TALL AS TREES, THE CYBORGS WERE UNSTOPPABLE AT NIMBLEDON.

The next clue was a dry cleaning receipt that led my family to the first non-player character (NPC) interaction, which helped set the tone for the rest of the game.

Person holding up a modified "A's" shirt.
One of the players holding up a freshly pressed, modified A’s shirt.

I made 3 T-shirts as swag, one for each kid who participated. I took a standard A’s shirt and used a Cricut to iron on a ring diacritic above the “A”, the “Team Human” logo on the right sleeve, and the kid’s first name on the left.

The Oakland Å’s — the Ångströms, named after the measurement in physics — will move from Oakland to Anchorage to escape California’s strict anti-Cybernetics-in-sports laws. (Their all-Cyborg team was the subject of the documentary ‘Androids in the Outfield’).
TO SAVE HUMANITY, THE ÅNGSTRÖMS MUST NOT BE ALLOWED TO LEAVE OAKLAND.

For those of you not in the Bay Area: in 2025, the Oakland A’s are planning to leave Oakland — their home for more than 50 years — for Las Vegas, and it’s highly unlikely that this decision will be reversed. A streak of gallows humor like this runs throughout the game.

The players then received instructions to buy a greeting card from another local business.

I took this photo of a nearby gallery and asked an AI (probably Stable Diffusion? I forget) to generate a dystopian robot-filled version of it. (I used a lot of AI tools to make a game about AI taking over the world! Irony!)

AI generated Robot version of a local mural.
AI generated dystopian robot-filled version of the mural on the outside of a local gallery.

They located the card and found the next clue:

People buying cards, "BELIEVE humanity will survive" card
Stephen generated the image for this card with DALL-E.

Inside, the text of the card reads “If we work together we can defeat the AI.” Within the folds of the card I tucked this photo, which I made with Adobe Firefly:

Original Pizza mural, and AI robot Version of Mural
(Left) AI generated dystopian version of local pizza shop mural. (Right) Original Mural.

After following a few more clues including an elaborate geocache puzzle, the players found a PO Box Key (learn more about these clues and some of the important lessons learned on Medium). Inside the mailbox was an acrylic box I custom built at Ace Makerspace.

"DYN-O-MITE" box, people holding fake dynamite
The mailbox contained an acrylic box which I custom built at Ace Makerspace in Berkeley.

There were several layers of deception here. The box says it’s filled with candy. But the “candy” looks like a weapon (sticks of dynamite). But the “dynamite” is actually a toy (a kazoo). But the kazoo is actually a weapon (in the context of the game).

THE KAZOO WAS THE ULTIMATE WEAPON AGAINST THE CYBORGS — THE SOUND PERMANENTLY DISRUPTS CYBORG CIRCUITRY.

(If someone complains about the sound of a Kazoo, they likely are part Cyborg.)

Will playing the kazoo really stop the robots from taking over the world? Maybe! Couldn’t hurt!

Some people thought this puzzle was the big finale…but wait, there’s (a lot) more:

Eventually the players found their way to the Berkeley Human Thrift Shop (actually the Berkeley Humane Thrift Shop) to find the next clue. They knew it was a book, but which book?

Photoshopped sign "Berkeley Human Thrift Shop"
(Top) Altered sign for “Thrift Shop Berkeley Human.” (Bottom Left) Book Safe with “Buy Me” card. (Bottom Right) book safe with Video Screen. The Acrylic for the screen was cut at Ace Makerspace.

The players also came across this flyer/clue in the window of a local coffee shop.

"KAZOO LESSONS" Poster
Kazoo Lessons clue hung in the window of a coffee shop
Group sits on bench outside.
The players call the number on the “KAZOO LESSONS” poster to solve the clue.

For the grand finale, I created a tiny Kusama mirror room in a wooden box I built at Ace Makerspace, filled it with some figurines I bought in Mexico, and tucked it in there:

Mirror box with small figurines
(Top Left) Utility box with hidden clue, (Bottom Left) Players opening the box, (Right) The mini world mirror box.

The answer to the puzzle is on a sticker in the box.

YOU DID IT! CONGRATULATIONS, AND GOOD LUCK NEXT YEAR. YOU’LL NEED IT.

I wanted to end with a bang, but I also didn’t want my route to involve any backtracking, so it wasn’t possible to finish at the thrift store or the post office. I think this worked fine as a finale; people were very surprised to see this little world inside a semi-public space, and everyone took pictures of it.

Total playtime was about 90 minutes. I gave very few hints (mostly things like “don’t guess!”, and “keep walking”).

TAKEAWAYS:

  • Branding to indicate what items are in-game worked well; I should have applied it more consistently. You can always make the branding more subtle to make the puzzle harder.
  • Likewise, if you are vague about whether there are any NPCs in your game and how they would be identified, you need to be careful that your players don’t act inappropriately with innocent bystanders.
  • Edit, edit, edit! People want to do the least amount of reading possible to play your game; try to convey your story in as few words as possible.
  • Hearing people say “Don’t give us any hints!” is a good sign — an indication they trust the puzzle and that the payoff will be worth the effort.
  • Playtest, even for a one-off! I did a walk-through of the game the night before with a friend, which uncovered many problems with both the technology and the narrative.

THE END

Coworking for Computer Vision

Hi, my name is Mark. I’ve been a member of ACE for almost 9 years. There have been three things on my to-do list gnawing at my psyche for some time:

  1. Learn about the Raspberry Pi single-board computer through Internet of Things (IoT) applications.
  2. Get hands-on experience with Artificial Intelligence.
  3. Learn the popular Python programming language.

Why these? Because computers are getting smaller while getting more powerful; Artificial Intelligence (AI) is running on ever smaller computers; and Python is a versatile, beginner-friendly language that’s well-documented and used for both Raspberry Pi (RPi) and AI projects.

I’ve been working in computer vision, a field of AI, for several years in both business development and business operations capacities. While I don’t have a technical background, I strive to understand how my employers’ products and services are engineered in order to facilitate communication with clients. Throughout my career I’ve asked a lot of engineers a lot of naive questions because I’m curious about how the underlying technologies come together at a fundamental level. I owe a big thanks to those engineers for their patience with me! It was time for me to learn it by doing it on my own.

Computer vision gives machines the ability to see the world as humans do, using methods for acquiring, processing, analyzing, and understanding digital images or spatial information.


To start my learning journey, I began a routine of studying at our ACE Makerspace coworking space every week to be around other makers. This helped me maintain focus after the pandemic-induced work-from-home lifestyle left me inhibited by a serious brain fog.

My work environment at ACE Coworking

OpenCV (Open Source Computer Vision Library) is a cross-platform library of programming functions mainly aimed at real-time computer vision. Among many other components, it includes a machine learning module with functions for statistical classification, regression, and clustering of data.

Fun Fact: Our ACE Makerspace Edgy Cam Photobooth seen at many ACE events uses an ‘Edge Detection’ technique also from the OpenCV Library.
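As a taste of how little code that takes, here is a minimal Python/OpenCV sketch of Canny edge detection on a single webcam frame. The threshold values are arbitrary and would need tuning for a real photobooth.

```python
import cv2

# Grab one frame from the default webcam and run Canny edge detection on it.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # thresholds chosen arbitrarily
    cv2.imwrite("edges.png", edges)
```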

A self-paced Intro to Python course came first. Then came a course on OpenCV which taught the fundamentals of image processing. Later still came tutorials on how to train a computer to recognize objects, and even faces, from a series of images.

Plotting the distribution of color intensities in the red, green, and blue color channels
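A plot like the one above boils down to a few lines of OpenCV and matplotlib; here is a minimal sketch (the image path is a placeholder for any local photo):

```python
import cv2
from matplotlib import pyplot as plt

# Plot the distribution of pixel intensities in each color channel.
image = cv2.imread("photo.jpg")  # placeholder path for any local image
for i, color in enumerate(("b", "g", "r")):  # OpenCV stores channels as BGR
    hist = cv2.calcHist([image], [i], None, [256], [0, 256])
    plt.plot(hist, color=color)
plt.xlabel("Pixel intensity")
plt.ylabel("Pixel count")
plt.savefig("histogram.png")
```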

 

3D scatter plot of distributions of grouped colors in images

 

A binary mask to obtain hand gesture shape, to be trained for gesture recognition

 

Notice the difference in probabilities associated with the face recognition predictions when the face is partially occluded by a face mask

Eventually, I moved on to more complex projects, including programming an autonomous mini robot car that responds to commands based on what the AI algorithm infers from an attached camera’s video feed. This was real-time computer vision! There were many starter robot car kits to choose from. Some are for educational purposes; others come pre-assembled with a chassis, motor controllers, sensors, and even software. Surely, this was the best path for me to get straight into the software and image processing. But the pandemic had bogged down supply chains, and it seemed that any product with a microchip was on backorder for months.

A backlog of cargo ships waiting outside west coast ports as a symbol of supply chain issues

I couldn’t find a starter robot car kit for sale online that shipped within 60 days, and I wasn’t willing to wait that long. And I didn’t want to skip this tutorial, because it was a great exercise combining the RPi, AI, and Python programming triad. ACE Makerspace facilities came to the rescue again with the electronics stations and 3D printers, which opened up my options.

I learned a few things working at computer vision hardware companies: sometimes compromises are made in hardware due to the availability of components, and sometimes compromises are made in software due to lack of time. One thing was for sure: I had to decide on an alternative hardware solution, because hardware supply was the limiting factor. Software, on the other hand, was relatively easy to modify to work with various motor controllers.

So after some research I decided on making my own robot car kit using the JetBot reference design. The JetBot is an open-source robot based on the Nvidia Jetson Nano, another single-board computer that is more powerful than the RPi. Would this design work with the RPi? I ordered the components and shifted focus to 3D printing the car chassis and mounts while waiting for parts from Adafruit and Amazon to arrive. ACE has two Prusa 3D printers, so I could run print jobs in parallel.



When the parts arrived, I switched over to assembling and soldering (and, in my case, de-soldering and re-soldering) the electronic components using ACE’s electronics stations, which are equipped with hand tools, soldering materials, and miscellaneous electrical components. Once fully assembled, swapping in the Raspberry Pi for the Jetson Nano was simple, and the robot booted up and operated as described on the JetBot site.

Soldering
It’s ALIVE! with an IP address that I use to connect remotely

The autonomous robot car starts by roaming around at a constant speed in a single direction. The Raspberry Pi drives the motor controls, operates the attached camera, and marshals the camera frames to the attached blue coprocessor, an Intel Neural Compute Stick (NCS), plugged into and powered by the Raspberry Pi’s USB 3.0 port. It’s this NCS that is “looking” for a particular type of object in each camera frame. The NCS is a coprocessor dedicated to the application-specific task of object detection, running a pre-installed neural network called MobileNet SSD that has been pre-trained to recognize a list of common objects. I chose the object type ‘bottle’.

“MobileNet” because these networks are designed for resource-constrained devices such as your smartphone. “SSD” stands for “Single-Shot Detector” because object localization and classification are done in a single forward pass of the neural network. In general, single-shot detectors tend to be less accurate than two-stage detectors, but they are significantly faster.
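For context, here is roughly what a single MobileNet SSD inference pass looks like using OpenCV’s DNN module as a stand-in for the NCS runtime. The model file names are placeholders for the publicly available pre-trained Caffe MobileNet SSD, and the confidence threshold is arbitrary.

```python
import cv2
import numpy as np

# Class list for the common pre-trained MobileNet SSD ("bottle" is index 5).
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]

# Placeholder file names for the pre-trained model definition and weights.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

frame = cv2.imread("frame.jpg")  # placeholder for a live camera frame
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()  # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    label = CLASSES[int(detections[0, 0, i, 1])]
    if confidence > 0.5 and label == "bottle":
        # Bounding box is returned as fractions of the frame size.
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        print("bottle at", box.astype(int), "confidence", confidence)
```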

The Neural Compute Stick’s processor is designed to perform the AI inference: accurately detecting and correctly classifying a ‘bottle’ in the camera frame. The NCS localizes the bottle within the camera frame and determines the bounding box coordinates of where in the frame the object is located. The NCS then sends these coordinates to the RPi, which determines the center of the bounding box and whether that single center point is to the left or right of the center of the camera frame.

Knowing this, the RPi will steer the robot accordingly by sending separate commands to the motor controller that drives the two wheels:

  • If that Center Point is Left of Center, then the motor controller will slow down the left wheel and speed up the right wheel;
  • If that Center Point is Right of Center, then the motor controller will slow down the right wheel and speed up the left wheel;

Keeping the bottle in the center of the frame, the RPi drives the car towards the bottle. In the lower-right corner of the video below is a picture-in-picture video from the camera on the Raspberry Pi. A ‘bottle’ is correctly detected and classified in the camera frames. The software [mostly] steers the car towards the bottle.
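The steering logic itself boils down to simple proportional control. Here is a hedged Python sketch of that idea; the motor-controller call is a hypothetical stand-in, since every kit exposes its own API, and the constants are arbitrary.

```python
FRAME_WIDTH = 300       # width of the camera frame fed to the detector
BASE_SPEED = 0.4        # nominal wheel speed, 0.0 to 1.0
GAIN = 0.6              # how aggressively to correct toward the bottle

def steer_toward(bbox_left, bbox_right, set_wheel_speeds):
    """Given the detected bottle's bounding box x-extent, nudge the car so
    the box center moves toward the center of the frame."""
    box_center = (bbox_left + bbox_right) / 2.0
    # Error in the range -0.5 .. +0.5: negative means the bottle is left of center.
    error = (box_center - FRAME_WIDTH / 2.0) / FRAME_WIDTH
    left_speed = BASE_SPEED + GAIN * error   # bottle to the left => slow the left wheel
    right_speed = BASE_SPEED - GAIN * error  # bottle to the left => speed up the right wheel
    set_wheel_speeds(left_speed, right_speed)

# Example with a fake motor controller that just prints the commands:
steer_toward(40, 120, lambda l, r: print(f"left={l:.2f} right={r:.2f}"))
```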

Older USB accelerators, such as the NCS (v1), can be slow and add latency to the computer’s reaction time, so there is a lag in executing motor control commands. (Not a big deal for a tabletop autonomous mini-car application, but it is a BIG deal for autonomous cars being tested in the real world on the roads today.) On the other hand, this would be difficult to do on the RPi alone, without a coprocessor, because the Intel NCS is engineered to perform the application-specific number-crunching more efficiently, and with less power, than the CPU on the Raspberry Pi.

Finally, I couldn’t help but think there was some irony in the supply chain dilemma I experienced while waiting for electronics to help me learn about robots: maybe employing more robots in factories will be how U.S. manufacturers improve the resilience of their supply chains if they decide to “onshore” or “reshore” production back onto home turf. Just my opinion.

Since finishing this robot mini-car I’ve moved on to learn other AI frameworks and even training AI with data in the cloud. My next challenge might be to add a 3D depth sensor to the robot car and map the room in 3D while applying AI to the depth data. A little while back I picked up a used Neato XV-11 robot vacuum from an ACE member, and I might start exploring that device for its LIDAR sensor instead.

Let me know if you’re interested in learning about AI or microprocessors, or if you’re working on similar projects. Until then, I’ll see you around ACE!

Mark Piszczor
LinkedIn

Made at AMT-June 2019

NOMCOM Fob All The Things dashboard | AMT Software • Bodie/Crafty
Hand Built Speaker | Workshop • David
Recycling Game | Workshop/Laser • Bernard M.
Solid wood credenza | Workshop • Raj J.
Tiny electronic brass jewelry | Electronics • Ray A.
RFID Mint Dispensing Box | Laser + Electronics • Crafty
Wood Signage | CNC Router • James L.
Fabric Kraken stuffed with 720 LEDs | Textiles + Electronics • Crafty

Designing a replacement tool grip in Fusion 360

This is what the filament nippers in our 3D printing area looked like.

They work fine, but one of the rubber grips has almost split in two.

A few weeks ago, Evan made a valiant effort at saving them:

But, alas, the patch quickly broke off.

It’s a great excuse for another Fusion 360 3D printing article!

I’ll make the replacement in PLA. It won’t be squishy like the original, but it’ll be more comfortable than the bare metal.

To model it, I took a photo, then used Fusion’s ‘attached canvas’ feature. The easy way to use this feature is to simply import the image without entering any dimensions at all. Then, right-click the attached-canvas object in the browser and select Calibrate. Fusion will prompt you to select two points. I chose the little hole near the joint and the end of the tang, a span that measures 98.6mm.

Now we can make a sketch of the profile. I fitted arcs to the shape as closely as I could. I find this easier than using splines when the shape allows for it. I used the ‘Fix’ tool instead of dimensions, since the scaled photo is what really defines the size here. I did not bother modeling the business end of the tool.

Next I extruded this profile to a 2mm thickness.

This was done in a component called tang. Next I created a new component called grip and sketched the outer profile. I projected the tang outline first, then offset the lower edge and sketched the upper end to eyeball-match the existing grip.

This was extruded ‘downward’ to create the basic shape of the lower half of the grip.

Next, I sketched a profile and cut away a depression for the inner part. This profile was offset from the tang outline very slightly (0.2mm) to allow for a reasonable fit. In this case, I may have to adjust the dimensions for fit a few times anyway, so this step could probably be omitted.  Still, I think it’s good practice to explicitly design appropriate fit clearance for mating parts.

A chamfer on the bottom completes the grip. It’s not an exact match but it’s close enough.

Finally, I mirrored the body to make the top half of the grip. I’ll print in two pieces and glue them together to avoid using support material.

When I don’t know for sure that I have the size of something right, I often print an ‘abbreviated’ version to test the fit. This part’s small enough that I probably don’t need to, but just to illustrate the step, here’s what I do. Use the box tool, with the intersect operation. Drag the box until it surrounds the area of interest. Precise dimensions are not necessary here; we’re just isolating the feature to be tested.

In this case, I’ve simply shaved off the bottom few millimeters. I can cancel the print after just a few layers and see how well it fits the handle.

Once I’m done testing, I can simply disable (or delete) the box feature in the history timeline.

Let’s print it and see what we’ve got!

Hm… not quite. The inner curve seems right, but the outer is too tight. I’ll tweak the first sketch and try again.

This one’s still not perfect, but I think it’s close enough. Here are the complete parts, fresh off the printer.

The fit is okay but there are a few minor issues: The parts warped very slightly when printed, and the cavity for the tang was just a hair too shallow.

A bit of glue and clamping would probably have solved the problem but I had to knock off for the day anyway, and took a bit of time the next day to reprint at my own shop. I even had some blue filament that’s a closer match to the original grip.

Here it is, glued and clamped up. I gave the mating faces a light sanding to help the glue stick better. I used thick, gel-style cyanoacrylate glue, which gives a few seconds to line things up before it grabs. It seems to work very well with PLA.

And here’s the result. Let’s hope it lasts longer than the original!

But wait… Has this all been worth it?

Well, probably not. I found brand-new nippers from a US vendor for $3.09 on eBay. They’re even cheaper if you order directly from the Far East.

Oh well.  I think the techniques are worthwhile to know. The main thing is that it made for a good blog post!

 

AMT’s Adventures at Maker Faire 2018

The Art Printing Photobooth aka The Edgy Printacular

At the Bay Area Maker Faire 2018, a team of Ace Monster Toys members created a photobooth where participants could take selfies which were then transformed into line art versions and printed, all initiated by pressing one ‘too-big-to-believe’ red button.

Back in March, AMT folks began prepping for Maker Faire 2018 and had an idea: what if you made a machine that could take a selfie and then generate a line art version of said selfie that could be printed out for participants like you and me?! Thus, the Art Printing Photobooth was born! This project was based on the Edgy Cam project by Ray Alderman. AMT created a special Slack channel just for Bay Area Maker Faire 2018, #maker-faire-2018. Then members set about figuring out how exactly to make this art-generating automaton, and Rachel (Crafty) campaigned for having a ‘too-big-to-believe’ push button. They would need many maker skills: CNC routing and file design, woodworking, electronics wiring, and someone to art it all up on the physical piece itself. Bob (Damp Rabbit) quickly volunteered to take on the design and CNC cutting, while Ray (whamodyne) started to chip away at the code that would be used to convert photos to line art.


Then the trouble began. By mid-April, our intrepid troubleshooters were running into all sorts of snags – so much so that the original code needed to be thrown out and rewritten from the ground up! To add additional difficulty (and awesomeness!), the team decided to use a Print on Demand (POD) service so participants could have their generated art uploaded and available to be printed on mugs, t-shirts, posters, etc. Soon after, Ray wrote new Digispark code for the big red button to actuate the script that converts and prints the line art, built with Python 3, the OpenCV library, and the printer library from https://github.com/python-escpos/python-escpos.
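Ray’s actual code isn’t reproduced here, but the pipeline is easy to sketch: grab a frame, convert it to line art with edge detection, and send it to a receipt printer with python-escpos. The following is a rough Python 3 sketch of that flow, not the original code; the USB vendor and product IDs are placeholders for whatever printer is attached.

```python
import cv2
from escpos.printer import Usb

def capture_and_print():
    # Grab a selfie frame from the webcam.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")

    # Convert the selfie to line art with Canny edge detection.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    line_art = cv2.bitwise_not(edges)  # black lines on white paper
    cv2.imwrite("line_art.png", line_art)

    # Send the image to a USB receipt printer (IDs below are placeholders).
    printer = Usb(0x04b8, 0x0202)
    printer.image("line_art.png")
    printer.cut()

capture_and_print()
```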


Meanwhile, Crafty Rachel and Bernard were configuring the TV mount that would be the selfie display of the photobooth, and Damp Rabbit was busy CNCing and painting up a storm to create the beautiful finished product – The Edgy Printacular! The EP was a hit and won three blue ribbons at Maker Faire 2018. Another happy ending that speaks to what a few creative makers can do when they put their heads together in a place with all the right equipment, Ace Monster Toys <3

Big empty room

AMT Expansion 2018

This month AMT turns 8 years old and we are growing! We have rented an additional 1,200 sq ft suite in the building. We have a Work Party Weekend planned June 1-3 to upgrade and reconfigure all of AMT. All the key areas at AMT are getting an upgrade:

CoWorking and Classroom are moving into the new suite. Rad wifi, chill space away from the big machines, and core office amenities are planned for CoWorking. The new Classroom will be reconfigurable and have double the capacity.

Textiles is moving upstairs into the light. The room will now be a clean fabrication hub with Electronics and 3D Printing both expanding into the space made available. Photo printing may or may not stay upstairs — plans are still forming up.

Metalworking, bike parking, and new storage (including the old lockers) will be moving into the old classroom. But before they move in, the room is getting a facelift: we’re returning to the cement floors, and the walls will get a new coat of paint.

The CNC room and workshop will then be reconfigured to take advantage of the space Metal vacated. We aren’t sure what that is going to look like beyond more workspace and possibly affordable storage for larger short-term projects.

Town Hall Meeting May 17th • 7:30PM • Plan the New Space

What expansion means to membership

The other thing that happened in May is that, after 8 years, our rent finally went up. It is still affordable enough that we get to expand. Expansion also means increasing membership volume to cover the new rent and to take advantage of all the upgrades. We are looking to add another 30 members by winter. Our total capacity before we hit the cap will be 200 members. We feel that offering more classes and the best bargain in coworking will allow us to do this. Please help get the word out!

The New Suite in the Raw

Big empty room

Fusion 360 Hangout Notes

We had a great session last night (2-12-18) at the Fusion 360 hangout.

  • I burned most of the time presenting the design discussed in my recent blog post on best practices. I fielded lots of questions and expanded on some of the points in that post, so everyone seemed to get something out of it.
  • Chris has been struggling with sketches that began life as imported DXF files. Lots of funny duplicated lines in the sketches we looked at. We kicked around a few ideas for him to try, but nobody had the magic answers.
  • Steve has been playing with Fusion’s Drawing feature & had some neat things to show.
  • Bob showed us some of his progress carving guitar parts. This is complex CAM stuff involving multiple operations and remounting parts to carve two sides. Can’t wait to see the progress.

A ‘pair-programming-style’ hangout was proposed for a future session. I think it’s a GREAT idea… We work together in pairs, sharing experience and generally bouncing ideas off each other while working through real member projects.

This kind of meeting can be run by anyone… and I’m looking for volunteers. I think a group meeting would be a lot of fun… that way we could negotiate which projects we might be able to help most with, or are most interested in. …but it doesn’t _have_ to be a group meeting. If nothing else, feel free to pipe-up in this forum anytime you get stuck and think an extra set of eyes would help. And _do_ make yourself available to others: I’ve learned a great deal about Fusion through other folks’ projects, since they so often approach the tasks in a way that would never occur to me.

By popular request, I’m going to put together a more traditional class for next time, focusing on beginners. The hands-on format was overwhelmingly preferred to anything else we’ve tried, so we’ll go with that. No schedule yet; watch this space!

The Vorpal Combat Hexapod

I demonstrated this fun robot at the last BoxBots build night and our general meeting last Thursday. Since then a few folks have asked questions so I thought I would post more detail.

The Vorpal Combat Hexapod is the subject of a Kickstarter campaign I discovered a few weeks ago. I was impressed and decided to back the project. I had a few questions so I contacted the designer, Steve Pendergrast. Then I had a few suggestions and before long we had a rich correspondence. I spent quite a bit more time than I’d expected to, offering thoughts for his wiki, design suggestions, etc.

Steve appreciated my feedback and offered to send me a completed robot if I would promise to demonstrate it for our membership. The robot you see in the photos was made by Steve, not me. Mine will be forthcoming!

You can read the official description on the Kickstarter page and project wiki. Here are my own thoughts and a few of the reasons I like the project so much.

It’s cool!

It has to be in order to get kids interested; that’s something Ray has always understood with BoxBots. While BoxBots offers the thrill of destructive combat, the hexapod offers spidery, insect-ish, crawly coolness with interactive games and programming challenges.

It’s a fun toy

Straight away, this robot offers a lot of play value. There are four walk modes, four dance modes, four fight modes, and a built-in record/playback function. To get kids interested in the advanced possibilities, you have to get them hooked first. Don’t be intimidated by that array of buttons. At the BoxBots build night, the kids all picked it up very quickly; I couldn’t get the controller out of their hands.

It’s open-source

The circuitry, firmware, and plastic parts are already published. A lot of crowd-funded projects promise release only after funding, and some only publish the STL files, which can be very difficult to edit. Steve has provided the full CAD source (designed in OnShape).

Easy to Accessorize

The Joust and Capture-the-flag games use special accessories that fasten to a standard mount on the robot’s nose. This simplifies add-on design since there’s no need to modify the robot frame. There are also magnets around the perimeter, encouraging fun cosmetic add-ons like eyes and nametags.

Off-the-shelf electronic components

There are no custom circuit boards here. It’s built with two Arduino Nano boards, two Bluetooth boards, a servo controller, buzzer, pot, micro-SD adapter, two pushbutton boards, inexpensive servos, etc. This stuff is all available online if you want to source your own parts. If you’re an Arduino geek, it will all look familiar.

No Soldering!

I think every kid should learn how to use a soldering iron in school, but for some it remains an intimidating barrier. In the hexapod, everything’s connected with push-on jumper wires. (If you source your own parts you will probably have to solder the battery case and switches, since these seldom have matching connectors.)

Scratch programming interface

The controller and robot firmware is written in Arduino’s C-like language, but the robot also supports a beginner-friendly drag-and-drop programming interface built with MIT’s Scratch system. I confess, I haven’t investigated this feature yet, but I’ve been curious about drag-and-drop programming paradigms for years. My first programs were stored on punched cards. Finally, I have an opportunity to see how today’s cool kids learn programming!

It’s 3D printed

The parts print without support and work fine at low resolution. You’ll want to get your own spool of filament so you have the color available for replacement parts. Any of our printers will work. I’ve had good luck so far with PLA, but Steve recommends more flexible materials like PETG or ABS.

Anyway, enough gushing. I do not have any financial interest in the project. I just like to encourage a good idea when I see one. The Kickstarter campaign just reached its goal a few days ago, so it’s definitely going to be funded. If you’d like to back the Kickstarter or learn more, here’s the link. You’ll have to act fast; there are only a few days left. (Full disclosure: I do get referral perks if you use this link.) Remember that you always assume some risk with crowd-funding. I’ll make no guarantees, but I’m satisfied that Steve is serious about the project and is no scammer.

Click here for the Hexapod Kickstarter campaign.

If you’d like to see this robot in person, contact me on Slack. I’ll try to arrange a demo.

-Matt

A note on Fusion 360 for the big CNC

The gcode emitted by Fusion 360 using the default settings does not work on our big CNC. Rama figured out that manually editing the gcode and removing the first six lines gets around the issue.

I was curious about this and decided to investigate. I reverse-engineered the codes in the preamble, but they all seemed to be perfectly valid Mach 3 g-code. Finally, I found the culprit: G28.

g28screenshot

It turns out that there’s a simple solution: click Post Process to create the gcode, then open the Properties pane and uncheck useG28. This option also controls some related codes at the end of the file.

g28codeshot

I do not recommend deleting the entire six-line preamble! It sets up various values in Mach 3’s brain, and omitting them may give unexpected results. It sets units to Metric or Imperial, for example. If omitted, your job might be unexpectedly scaled to a weird size.

That’s all you really need to know! Read on if you’re interested in the details.

The issue is covered in this article:

http://discuss.inventables.com/t/learning-about-g28/12205

Briefly, G28 is used to return the cutter-head to the home position. If your CNC machine has end-stop switches, Mach 3 can be configured to move to the physical limits of its travel, which is often a convenient parking place for the cutter-head at the end of the job. It also resets Mach 3’s zero position in case you have some kind of permanent workpiece mounting arrangement that always positions the workpiece in the same place.

We don’t use the big CNC this way. Instead, we mount workpieces in a variety of ways and manually set the zero position before each job. The article above makes a case for implementing G28, but I don’t think it’s applicable for us.

I figured this out by digging into the code. It turns out that the tool-path is converted to gcode by a nicely commented JavaScript program. Search your system for ‘mach3mill.cps’. It will be buried down in the bowels of your application tree somewhere, and is probably in a different place for PCs vs. Macs. I looked for the G28 code, found it was controlled by an option, then finally googled for that option to locate the post above. Anyway, it’s good to know that we have flexibility if we need to further customize gcode generation.