Repairing a Minolta XG-9 camera

This is the story of how I repaired, broke, and repaired again a Minolta XG-9 35mm SLR film camera.

It will be long, and of niche appeal, but I’m writing it up in case anyone else finds it as useful as I would have done before I started. Here’s the plot summary:

  • The camera’s shutter was sometimes sticking open when fired in Auto mode
  • I set out to fix this by cleaning the electrical contacts of the film-speed selection switch, beneath the camera’s top plate
  • That appears to have been the correct fix, but when reassembling the camera, I broke a fragile plastic part which holds the power switch in place, rendering the camera useless
  • I modelled a replacement part using 3D modelling software, had it 3D printed, and replaced the part in the camera
  • The replacement part is good enough to use, but could probably have been better; I’ve published the model for it, and would appreciate any ideas

If that sounds in any way interesting, read on. This was my first attempt at repairing a camera and my first experience of 3D modelling and printing, so it was certainly interesting to me.

The original problem: a sticking shutter

The Minolta XG-9

The camera is one that I wrote a happy post about last year (“A film camera”). It was made in 1980 or so, but I bought it in 2018.

It worked well, except that the shutter would sometimes stick open when fired in Auto mode, and the only way to close it was to switch out of Auto. When stuck open, it often let in enough light to ruin both the previous and following photos as well as the current one. I was keen to fix it, especially as these cameras are designed so that the light meter only works when in Auto mode.

The first hint I found online was here, in a photo.net post from 2003 by “rokkor fan” who writes: “I had this once on a XG-1, and it was as a result of the mirror slap jarring the circuitry under the shutter release. I had a friend clean the circuits and it worked a treat after that.” No more details there, but a poor electrical contact sounds promising.

There’s a 238-page service manual available for these cameras (thank you Benoît Suaudeau) and it has an electrical troubleshooting section starting at page 174 that says:

The troubleshooting table from the service manual

“AUTO… curtain is kept open” looks like our problem, and “ASA contact defective” seems worth checking. ASA refers to the film-speed selection switch, which is on the top of the camera. It makes sense that a lost contact from that switch would only affect Auto mode, since Auto needs to know the film speed to decide how long to leave the shutter open.
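To make that concrete: auto-exposure is just the standard exposure arithmetic with the film speed as one of the inputs. Here's a little illustrative calculation (my own sketch of the textbook relation, nothing taken from the service manual):

```python
import math

def shutter_time(scene_ev100, aperture, iso):
    """Shutter time in seconds for a metered scene.

    scene_ev100: meter reading as an exposure value at ISO 100
    aperture:    f-number
    iso:         film speed, as set on the ASA dial

    Uses the standard relation 2^EV = N^2 / t, with the EV
    adjusted by log2(iso / 100) for the film speed.
    """
    ev = scene_ev100 + math.log2(iso / 100)
    return aperture ** 2 / 2 ** ev

# The same scene and aperture at two different ASA settings:
print(shutter_time(12, 4.0, 100))   # ~1/256 s
print(shutter_time(12, 4.0, 400))   # ~1/1024 s
```

If the ASA contact is open, that calculation has no valid film speed to work from, and the manual's table tells us the observable symptom is a curtain that stays open in Auto.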

There’s a YouTube video here, by Florian Buschmann, which shows how to take the lid off. The top cover has three axes poking out of it, one for the rewind knob and power switch, one for the shutter button and film speed knob, and one for the winding lever. The components attached to the tops of these can all be unscrewed from their axes and removed, leaving a plastic top-plate attached via four small screws, two at front and two at the back towards the middle of the camera. Here’s the camera with the top off:

Minolta XG-9 with top plate removed

The film-speed switch is beneath the large black nubbin poking out of the top about 3/4 of the way across from the left. The smaller nubbin at the left side is the power switch mechanism and rewind axis, which is the part I was just about to break while reassembling the camera.

Unscrewing the parts atop the film-speed switch reveals a brush mechanism with a flat contact plate. I cleaned the plate with switch cleaner (it was quite grubby) and made sure the brushes were sticking out enough on the switch side, then reassembled the top of the camera.

After reassembly I tested the shutter in Auto mode a few times, and it didn’t stick. Flushed with success, I set out to load a film — and that’s when the power switch came loose. While screwing on the metal collar that keeps the power switch in place, I had managed to shear off the entire threaded top of the plastic part that holds it down. A problem.

The broken part

Minolta part 2006-3309-2, “Rewinding axis receiver”

The picture on the right shows the part I had broken. It’s intact in this photo, which was taken during one of my initial attempts to glue it back together — attempts that always failed when force was applied to screw the threaded collar back onto the top.

The 40-year-old plastic is quite brittle, and as you can see, I also broke one of its little lugs just unscrewing it from the body. And I am a reasonably delicate person.

Without this part, it’s impossible to use the power switch: it is left floating, switching between on, off, and self-timer modes at random. Having failed to glue it, my options seemed to be:

  1. Write off the camera. I wasn’t going to do that yet.
  2. Figure out some ingenious bodge to keep using the power switch even though the official mechanism didn’t work. Well, I did sort of do this using a piece of thread attached to the switch mechanism, but I didn’t like that very much.
  3. Find a replacement part. For an easily-broken part in a relatively inexpensive 40-year-old camera from a company that stopped making cameras over a decade ago, that seemed unlikely, and my first searches came up with nothing.
  4. Buy a non-working camera, sold for parts, and plunder it for this part. Yes, but what if it just broke in the same way again? That would be unbearable. Or what if this part was already broken in the other camera too?
  5. Make a new part. This takes time and money and might not work, but it puts the outcome in my own hands and hopefully teaches me something new.

My first assumption, knowing very little about 3D printing, was that this component was too fiddly to be produced that way — especially the rather fine threaded bit at the top. But reading about different 3D printing processes, it seemed faintly possible that a nylon SLS printer, with 0.1mm resolution, might just be able to produce a viable part, especially as the thread was required to screw into a metal collar (which could perhaps do a bit of self-tapping) rather than a plastic one.

Modelling the part

Here’s this component in the service manual, identified as “Rewinding axis receiver”. It comes in two variations with codes 2006-3309-02 and 2006-3309-04. I think mine is a 2006-3309-02.

The part as it appears in the maintenance manual

First I needed a 3D model.

I measured it with a vernier caliper and some close-up photos. The thread seems to be 0.5mm x 7mm and the three screw-fixing lugs are spaced at 0°, 140°, and 220°. A hole through the middle admits a metal axle of 4mm diameter. There is a cutout in the collar at the bottom, into which the mechanism to release the film door when the axle is lifted fits. There’s also a slot in the midriff which a metal clip pokes into, to meet a notch in the axle that snaps it into its usual lowered position.
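Those numbers translate directly into a parametric model. Just as a record of the measurements, here's a rough sketch of the main geometry using the open-source CadQuery library (not the tool I actually used, as described below). The thread is simplified to a plain cylinder, the flange and lug dimensions are my approximations, and the bottom cutout and midriff slot are omitted:

```python
import math
import cadquery as cq

# Dimensions in mm; values marked (approx) are guesses, the rest measured
bore_d = 4.0                  # central hole for the rewind axle
thread_d = 7.0                # threaded top, 0.5 mm pitch (thread form omitted)
thread_h = 3.0                # height of the threaded section (approx)
flange_d = 14.0               # base flange diameter (approx)
flange_h = 1.5                # base flange thickness (approx)
lug_d, lug_hole_d = 4.0, 1.6  # lug tab and screw-hole diameters (approx)
lug_r = 8.5                   # radius from centre to each lug (approx)
lug_angles = [0, 140, 220]    # lug spacing in degrees

pts = [(lug_r * math.cos(math.radians(a)), lug_r * math.sin(math.radians(a)))
       for a in lug_angles]

flange = cq.Workplane("XY").circle(flange_d / 2).extrude(flange_h)
column = cq.Workplane("XY").circle(thread_d / 2).extrude(flange_h + thread_h)
lugs = (cq.Workplane("XY").pushPoints(pts).circle(lug_d / 2).extrude(flange_h)
        .pushPoints(pts).circle(lug_hole_d / 2).cutThruAll())

part = (flange.union(column).union(lugs)
        .faces(">Z").workplane().hole(bore_d))   # bore the axle hole right through

cq.exporters.export(part, "rewinding-axis-receiver-sketch.stl")
```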

Knowing nothing about 3D modelling software, I asked my kids for advice first. Their suggestion was Blender, but I’m a bit afraid of Blender. I did try Wings3D, another free application (written in Erlang using wxWidgets — interesting!) but I got stuck on how to model a screw thread, and decided I should probably start with a trial version of something expensive with extensive documentation.

I came across Autodesk’s Fusion 360 while searching for screw thread tutorials, when I found a page that basically just said “use the thread tool”, which seemed about right. Fusion 360 is a very expensive subscription product with cloud-storage lock-in, but it has a one-year trial period for “hobbyist” use. That’s me!

The part modelled in Fusion 360

I really enjoyed using Fusion 360, and I waxed lyrical about it here on Twitter. Much of the appeal has to do with good interactive feedback, but the core thing is that it’s built on a very nice 2D sketching program: the expectation seems to be that you sketch in 2D and then extrude into 3D, which I find a lot simpler than trying to design in 3D. But I don’t know how other software of this kind works. Anyway, I successfully built a nice-looking model, though with a thread that I suspected wasn’t really possible to print.

3D printing

I exported the model as an STL file and sent it for nylon/polyamide SLS printing at i.materialise. I have to admit that I picked this company because they had the most anonymous, automatic-looking order page, and I felt embarrassed about having a real person look over my impossible design. I didn’t notice at first that i.materialise are actually in Belgium, and it’s slightly crazy to send your model from London to Belgium for printing when there are companies in London (digits2widgets, 3DPRINTUK) doing the same thing, but by that point I was sort-of committed.

Because of the minimum order price, I ordered three different pieces. Two were from my best attempt at the model, which I asked for once in SLS (laser) process and once in MJF (HP’s ink-based) process. The third was an SLS print with slightly different dimensions for some things I wasn’t sure of. The pieces took about a week to arrive, and the price was quite high — the nominal charge for printing each item was £11.15, but that was quoted before VAT and shipping, and the eventual total was very close to £50.

Here are the results. The black part on the left is the broken original, the white one is the SLS of the variant model, and the grey one on the right is the MJF. (The colour doesn’t matter, as the part is not visible from outside the camera.)

3D printed Minolta parts

Receiving these was a really exciting moment: a true marvel that it was actually possible to design and build this neat little component with no worthwhile expertise on my part!

Of the two processes, the MJF version came out a bit fatter than the SLS — the holes are smaller, the walls are thicker. I think the SLS copy is more true to the design size. Although the texture of the printed nylon is unpolished (they both feel a bit like paper) and looks almost crumbly, both of them feel very solid, are harder and less flexible than I expected, and seem pretty tough.

The threads are indeed pretty sketchy. The SLS one is a little too narrow, without enough depth to the thread: the collar doesn’t tighten well and can easily jump a thread if pushed. The MJF has even less visible thread, but it is a bit too fat, and it is hard to fit the collar onto it at all. A tight fit like that would probably be a good thing in itself, but the thicker walls of the MJF part interfered with the film-door release disc underneath the part, so it was an SLS copy that I ended up fitting. Here it is in the camera.

3D printed Minolta part fitted in XG-9 camera

I think that the thread, although weak, may hold well enough. It isn’t the last line of defence against the power switch falling off: the rewind knob on top of it, held on with a metal-on-metal screw fitting, is there to prevent that. This thread just needs to stop the power switch from lifting and losing its position while loading or rewinding film. But it would be nice to have done better, if it were possible to do so. I’d like to know if there is some technique for making threads more “printable” at this kind of scale.

Conclusion

This feels like a successful outcome, and at least the original problem is fixed. It would obviously be better not to have broken this part at all — and although it was my fault, I did spend some time mentally railing at Minolta for using a plastic thread here in the first place. Perhaps the main lesson is just that old plastic is fragile. But I enjoyed the process and am happy with the result, which is after all a camera that works better than it previously did.

I’ve published the 3D model, in a Github repository at cannam/minolta-2006-3309-2, in case it is of use to anyone else. If you’ve any suggestions for how this could have been done better, I’d like to hear! Of course this does show a big limitation of using Fusion 360 to do the modelling: the main file in that repo is in a proprietary format and probably useless to any other tool. I’ve included a couple of exports as well, including the STL file.

 

MIREX 2018 submissions

The 2018 edition of MIREX, the Music Information Retrieval Evaluation eXchange, was the sixth in a row for which we at the Centre for Digital Music submitted a set of Vamp audio analysis plugins for evaluation. For the third year in a row, the set of plugins we submitted was entirely unchanged — these are increasingly antique methods, but we have continued to submit them with the idea that they could provide a useful year-on-year baseline at least. It also gives me a good reason to take a look at the MIREX results and write this little summary post, although I’m a bit late with it this year, having missed the end of 2018 entirely!

For reference, the past five years’ posts can be found at: 2017, 2016, 2015, 2014, and 2013.

Structural Segmentation

No results appear to have been published for this task in 2018; I don’t know why. Last time around, ours was the only entry. Maybe it was the only entry again, and since it was unchanged, there was no point in running the task.

Multiple Fundamental Frequency Estimation and Tracking

After 2017’s feast with 14 entries, 2018 is a famine with only 3, two of which were ours; the third (which I can’t link to, because its abstract is missing) was restricted to a single subtask, in which it got reasonable results. Results pages are here and here.

Audio Onset Detection

Almost as many entries as last time, and a new convolutional network from Axel Röbel et al disrupts the tidy sweep of Sebastian Böck’s group at the top of the results table. Our simpler methods are squarely at the bottom this time around. Röbel’s submission has a nice informative abstract which casts more light on the detailed result sets and is well worth a read. Results here.

Audio Beat Tracking

Pure consolidation: all the 2018 entries are repeats from 2017, and all perform identically (with the methods from Böck et al doing better than our plugins). Every year I say that this doesn’t feel like a solved problem, and it still doesn’t — the results we’re seeing here still don’t seem all that close to human performance, but perhaps there are misleading properties to the evaluation. Results here, here, here.

Audio Tempo Estimation

This is a busier category, with a new dataset and a few new submissions. The new dataset is most intriguing: all of the submissions perform better with the new dataset than the older one, except for our QM Tempo Tracker plugin, which performs much, much worse with the new one than the old!

I believe the new dataset is of electronic dance music, so it’s likely that much of it is high tempo, perhaps tripping our plugin into half-tempo octave errors. We could probe this next time by tweaking the submission protocol a little. Submissions are asked to output two tempo estimates, and the results report whether either of them was correct. Because our plugin only produces one estimate, we lazily submit half of that estimate as our second estimate (with a much lower salience score). But if our single estimate was actually half of the “true” value, as is plausible for fast music, we would see better scores from submitting double instead of half as the second estimate.
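In code the tweak would be trivial: something like this sketch, where the tolerance figure is made up for illustration and is not the MIREX criterion.

```python
def estimates_half(t):
    return (t, t / 2)     # what we submit now

def estimates_double(t):
    return (t, t * 2)     # the alternative for fast material

def either_correct(estimates, truth, tol=0.04):   # tol is illustrative only
    return any(abs(e - truth) / truth <= tol for e in estimates)

truth = 140.0     # a plausible EDM tempo
tracked = 70.0    # our single estimate, off by an octave

print(either_correct(estimates_half(tracked), truth))    # False: (70, 35)
print(either_correct(estimates_double(tracked), truth))  # True:  (70, 140)
```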

Results are here and here.

Audio Key Detection

Some novelty here from a pair of template-based methods from the Universitat Autonoma de Barcelona, one attributed to Galin and Castells-Rufas and the other to Castells-Rufas and Galin. Their performance is not a million miles away from our own template-based key estimation plugin.

The strongest results appear to be from a neural network method from Korzeniowski et al at JKU, an updated version of one of last year’s better-performing submissions, an implementation of which can be found in the madmom library.

Results are here.

Audio Chord Estimation

A lively (or daunting) category. A team from Fudan University in Shanghai, whence came two of the previous year’s strongest submissions, is back with another new method, an even stronger set of results, and once again a very readable abstract; and the JKU team have an updated model, just as in the key detection category, which also performs extremely impressively. Meanwhile a separate submission from JKU, due to Stefan Gasser and Franz Strasser, would have been at the very top had it been submitted a year earlier, but is now a little way behind. Convolutional neural networks are involved in all of these.

Our Chordino submission can still be described as creditable. Results can be found here.

 

EasyMercurial v1.4

Today’s second post about a software release will be a bit less detailed than the first.

I’ve just coordinated a new release of EasyMercurial, a cross-platform user interface for the Mercurial version-control system. The previous EasyMercurial release was back in February 2013. It looks a bit like this.

Screenshot of EasyMercurial v1.4

EasyMercurial was written with a bit of academic funding from the SoundSoftware project, which ran from 2010 to 2014. The idea was to make something as simple as possible to teach and understand, and we believed that the Mercurial version-control system was the simplest and safest to learn so we should base it on that. The concurrent rise of Github, and resulting dominance of Git as the version control software that everyone must learn, took the wind out of its sails. We eventually tacitly accepted that the v1.3 release made in 2013 was “finished”, and abandoned the proposed feature roadmap. (It’s open source, so if someone else wanted to maintain it, they could.)

EasyMercurial has continued to be a nice piece of software to use, and I use it myself on many projects, so when a recent change in the protocol support at the world’s biggest public Mercurial hosting site, Bitbucket, broke the Windows version of EasyMercurial 1.3, I didn’t mind having an excuse to update it. So now we have version 1.4.

This release doesn’t change a great deal. It updates the code to use the Qt5 toolkit and improves support for hi-dpi displays. I’ve also brought the packaging process up to date, re-packaging with current Qt, Mercurial (where bundled), and KDiff3 diff-merge code.

Mercurial usage itself has moved on in most quarters since EasyMercurial was conceived. EasyMercurial assumes that you’ll be using named branches for branching development, but these days using bookmarks for lightweight branching (more akin to Git branching) is more popular — EasyMercurial shows bookmarks but can’t do anything useful with them. Other features of modern Mercurial that could have been very helpful in a simple application like this, such as phases, are not supported at all.

Anyway: EasyMercurial v1.4. Free for Windows, Linux, and macOS. Get it here.

Sonic Visualiser v3.2

Another release of Sonic Visualiser is out. This one, version 3.2, has some significant visible changes, in contrast to version 3.1 which was more behind-the-scenes.

The theme of this release could be said to be “oversampling” or “interpolation”.

Waveform interpolation

Ever since the Early Days, the waveform layer in Sonic Visualiser has had one major limitation: you can’t zoom in any closer (horizontally) than one pixel per sample. Here’s what that looks like — this is the closest zoom available in v3.1 or earlier:

The closest zoom available in v3.1: one pixel per sample

This isn’t such a big deal with a lower-resolution display, since you don’t usually want to interact with individual samples anyway (you can’t edit waveforms in Sonic Visualiser). It’s a bigger problem with hi-dpi and “retina” displays, on which individual pixels can’t always be made out.

Why this limitation? It allowed an integer ratio between samples and pixels to be used internally, which made it a bit easier to avoid rounding errors. It also sidestepped any awkward decisions about how, or whether, to show a signal in between the sample points.

(In a waveform editor like Audacity it is necessary to be able to interact with individual samples, so some decision has to be made about what to show between the sample points when zoomed in closely. Older versions of Audacity connected the sample points with straight lines, a decision which attracted criticism as misrepresenting how sampling works. More recent versions show sample points on separate stems without connecting lines.)

In Sonic Visualiser v3.2 it’s now possible to zoom closer than one pixel per sample, and we show the signal oversampled between the sample points using sinc interpolation. Here’s an example from the documentation, showing the case where the sample values are all zero but for a single sample with value 1:

The sample points are the little square dots, and the wiggly line passing through them is the interpolated signal. (The horizontal line is just the x axis.) The principle here is that, although there are infinitely many ways to join the dots, there is only one that is “smooth” enough to be expressible as a sum of sinusoids of no higher frequency than half the sampling rate — which is the prerequisite for reconstructing a signal sampled without aliasing. That’s what is shown here.

The above artificial example has a nice shape, but in most cases with real music the interpolated signal will not be very different from just joining the dots with a marker. It’s mostly relevant in extreme cases. Let’s replace the single sample of value 1 above with a pair of consecutive samples of value 0.5:

Two consecutive samples of value 0.5, with the interpolated signal peaking above them

Now we see that the interpolated signal has a peak between the two samples with a greater level than either sample. The peak sample value is not a safe indication of the peak level of the analogue signal.
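This effect is easy to reproduce outside Sonic Visualiser. Here's a minimal numpy sketch of the same band-limited interpolation (my own illustration, not Sonic Visualiser's actual code, and using a bluntly truncated sinc sum rather than anything efficient):

```python
import numpy as np

def sinc_interpolate(x, factor):
    """Evaluate the unique band-limited signal through the sample
    points of x, at factor times the original sample rate."""
    n = np.arange(len(x))
    t = np.arange(len(x) * factor) / factor
    # Whittaker-Shannon reconstruction: a sinc-weighted sum of all samples
    return np.array([np.dot(x, np.sinc(ti - n)) for ti in t])

x = np.zeros(32)
x[15] = x[16] = 0.5          # two consecutive samples of value 0.5
y = sinc_interpolate(x, 16)

print(x.max())               # 0.5
print(round(y.max(), 3))     # ~0.637, close to 2/pi
```

The interpolated peak midway between the two samples comes out at 2/π, a little over 2 dB above the peak sample value of 0.5.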

Incidentally, another new feature in v3.2 is the ability to import audio data from a CSV or similar data file rather than only from standard audio formats. That made it much easier to set up the examples above.

Spectrogram and spectrum oversampling

The other oversampling-related feature added in v3.2 appears in the spectrogram and spectrum layers. These layers now have an option to set an oversampling level, from the default “1x” up to “8x”.

This option increases the length of the short-time Fourier transform used to generate the spectrum, by padding the time-domain signal window with additional zero-valued samples before calculating the transform. This results in an oversampled frequency-domain output, with a higher visual resolution than would have been obtained from the original, un-zero-padded sample window. The result is a smoother spectrum in which the locations of peaks can be seen with a little more accuracy, somewhat like the waveform example above.
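In numpy terms the operation amounts to something like the following, a sketch of the idea rather than Sonic Visualiser's code, using the same pair of test frequencies as the example below:

```python
import numpy as np

sr, n, oversample = 44100, 1024, 8

t = np.arange(n) / sr
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 493.9 * t)
w = np.hanning(n)

plain = np.abs(np.fft.rfft(x * w))                   # n/2 + 1 bins
padded = np.abs(np.fft.rfft(x * w, n * oversample))  # 8x as many bins, smoother,
                                                     # but no new information
```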

This is nice in principle, but it can be deceiving.

In the case of waveform oversampling, there can be only one “matching” signal, given the sample points we have and the constraints of the sampling theorem. So we can oversample as much as we like, and all that happens is that we approximate the analogue signal more closely.

But in a short-time spectrum or spectrogram, we only use a small window of the original signal for each spectrum or spectrogram-column calculation. There is a tradeoff in the choice of window size (a longer window gives better frequency discrimination at the expense of time discrimination) but the window always exposes only a small part of the original signal, unless that signal is extremely short. Zero-padding and using a longer transform oversamples the output to make it smoother, but it obviously uses no extra information to do it — it still has no access to samples that were not in the original window. A higher-resolution output without any more information at the input can appear more effective at discriminating between frequencies than it really is.

Here’s an example. The signal consists of a mixture of two sine waves one tone apart (440 and 493.9 Hz). A log-log spectrum (i.e. log frequency on x axis, log magnitude on y) with an 8192-point short-time Fourier transform looks like this:

Log-log spectrum of the two tones with an 8192-point transform: two clearly separated peaks

A log-log spectrum with a 1024-point STFT looks like this [1]:

Log-log spectrum of the same tones with a 1024-point transform: a single merged peak

The 1024-sample input isn’t long enough to discriminate between the two frequencies — they’re close enough that it’s necessary to “hear” a longer fragment than this in order to determine that there are two frequencies at all [2].

Add 8x oversampling to that last example, and it looks like this:

The 1024-point spectrum with 8x oversampling: smoother, but still a single merged peak

This is very smooth and looks super detailed, and indeed we can use it to read the peak value with more accuracy — but the peak is deceptive, because it is still merging the two frequency components. In fact most of the detail here consists of the frequency response of the 1024-point windowing function used to shape the time-domain window (it’s a Hann window in this case).

Also, in the case of peak frequencies, Sonic Visualiser might already provide a way to get the same information more accurately — its peak-frequency identification in both spectrum and spectrogram views uses phase unwrapping instead of spectrum interpolation to estimate the frequencies of stable harmonics, which gives very good results if the sound is indeed harmonic and stable.
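Roughly, that works by comparing the phase of a bin across two overlapping frames: the deviation of the measured phase advance from the advance expected for the bin's nominal centre frequency yields a correction to the frequency estimate. Here's a sketch of the idea (mine, not Sonic Visualiser's implementation):

```python
import numpy as np

def refined_peak_frequency(x, sr, n=1024, hop=256):
    """Estimate the frequency of the strongest component from the
    phase difference between two overlapping windowed frames."""
    w = np.hanning(n)
    a = np.fft.rfft(x[:n] * w)
    b = np.fft.rfft(x[hop:hop + n] * w)
    k = np.argmax(np.abs(b))                    # strongest bin
    expected = 2 * np.pi * hop * k / n          # advance if exactly on-bin
    d = np.angle(b[k]) - np.angle(a[k]) - expected
    d = (d + np.pi) % (2 * np.pi) - np.pi       # wrap to [-pi, pi)
    return (k + d * n / (2 * np.pi * hop)) * sr / n

sr = 44100
t = np.arange(sr) / sr
print(refined_peak_frequency(np.sin(2 * np.pi * 440 * t), sr))  # ~440.0
```

For the two-tone example above this would fare no better than the oversampled spectrum, since both components fall into the same analysis window; it shines when there really is a single stable component near the peak.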

Finally, there’s a limitation in Sonic Visualiser’s implementation of this oversampling feature that eliminates one potential use for it, which is to choose the length of the Fourier transform in order to align bin frequencies with known or expected frequency components of the signal. We can’t generally do that here, since Sonic Visualiser still only supports a few fixed multiples of a power-of-two window size.

In conclusion: interesting if you know what you’re looking at, but use with caution.


[1] Notice that we are connecting sample points in the spectrum with straight lines here — the same thing I characterised as a bad idea in the discussion of waveforms above. I think this is more forgivable here because the short-time transform output is not a sampled version of an original signal spectrum, but it’s still a bit icky.

[2] This is not exactly true, but it works for this example.

Rubber Band Library v1.8.2

I have finally managed to get together all the bits that go into a release of the Rubber Band library, and so have just released version 1.8.2.

The Rubber Band library is a software library for time-stretching and pitch-shifting of audio, particularly music audio. That means that it takes a recording of music and adjusts it so that it plays at a different speed or at a different pitch, and if desired, it can do that by changing the speed and pitch “live” as the music plays. This is impossible to do perfectly: essentially you are asking software to recreate what the music would have sounded like if the same musicians had played it faster, slower, or in a different key, and there just isn’t enough information in a recording to do that. It changes the sound and is absolutely not a reversible transformation. But Rubber Band does a pretty nice job. For anyone interested, I wrote a page (here) with a technical summary of how it does it.
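If you want to try it from a script, one convenient route is the third-party pyrubberband wrapper, which calls out to the rubberband command-line utility (so that needs to be installed; the file names here are placeholders):

```python
import soundfile as sf
import pyrubberband as pyrb   # third-party wrapper around the rubberband CLI

y, sr = sf.read("input.wav")             # placeholder input file

slower = pyrb.time_stretch(y, sr, 0.8)   # 80% speed, pitch unchanged
higher = pyrb.pitch_shift(y, sr, 3)      # up three semitones, speed unchanged

sf.write("slower.wav", slower, sr)
sf.write("higher.wav", higher, sr)
```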

I originally wrote this library between 2005 and 2007, with a v1.0 release at the end of 2007. My aim was to provide a useful tool for open source GPL-licensed audio applications on Linux, like Ardour or Rosegarden, with a commercial license as an afterthought. As so often happens, I seriously underestimated the work involved in getting the library from “working” (a few weeks of evening and weekend coding) to ready to use in production applications (two years).

It has now been almost six years since the last Rubber Band release, and since this one is just a bugfix release, we can say the library is pretty much finished. I would love to have the time and mental capacity for a version 2: there are many many things I would now do differently. (Sadly, the first thing is that I wouldn’t rely on my own ears for basic testing any more—in the intervening decade my hearing has deteriorated a lot and it amazes me to think that I used to accept it as somehow authoritative.)

In spite of all the things I would change, I think this latest release of version 1 is pretty good. It’s not the state of the art, but it is very effective, and it is in use right now in professional audio applications across the globe. I hope it can be useful to you somehow.

 

Repoint: A manager for checkouts of third-party source code dependencies

I’ve just tagged v1.0 of Repoint, a tool for managing library source code in a development project. Conceptually it sits somewhere between Mercurial/Git submodules and a package manager like npm. It is intended for use with languages or environments that don’t have a favoured package manager, or in situations where the dependent libraries themselves aren’t aware that they are being package-managed. Essentially, situations where you want, or need, to be a bit hands-off from any actual package manager. I use it for projects in C++ and SML among other things.

Like npm, Bundler, Composer etc., Repoint refers to a project spec file that you provide, listing the libraries you want to bring in to your project directory (and they are brought in to the project directory, not installed to a central location). Like them, it creates a lock file to record the versions that were actually installed, which you can commit for repeatable builds. But unlike npm et al, all Repoint actually does is clone from the libraries’ upstream repository URLs into a subdirectory of the project directory, just as happens with submodules, and then report accurately on their status compared with their upstream repositories later.
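From memory, a minimal spec file looks something like the following, but treat the key names as illustrative rather than exact, since the README (linked below) is the authoritative reference:

```json
{
    "config": {
        "extdir": "ext"
    },
    "libraries": {
        "vamp-plugin-sdk": {
            "vcs": "git",
            "service": "github",
            "owner": "c4dm"
        }
    }
}
```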

The expected deployment of Repoint consists of copying the Repoint files into the project directory, committing them along with everything else, and running Repoint from there, in the manner of a configure script — so that developers generally don’t have to install it either. It’s portable and it works the same on Linux, macOS, or Windows. Things are not always quite that simple, but most of the time they’re close.

At its simplest, Repoint just checks stuff out from Git or whatever for you, which doesn’t look very exciting. An example on Windows:

Repoint checking out dependencies in a Windows terminal

Simple though Repoint’s basic usage is, it can run things pretty rigorously across its three supported version-control systems (git, hg, svn), it gets a lot of annoying corner cases right, and it is solid, reliable, and well-tested across platforms. The README has more documentation, including of some more advanced features.

Is this of any use to me?

Repoint might be relevant to your project if all of the following apply:

  • You are developing with a programming language or environment that has no obvious single answer to the “what package manager should I use?” question; and
  • Your code project depends on one or more external libraries that are published in source form through public version-control URLs; and
  • You can’t assume that a person compiling your code has those libraries installed already; and
  • You don’t want to copy the libraries into your own version-control repo to form a Giant Monorepo; and
  • Most of your dependent libraries do not similarly depend on other libraries (Repoint doesn’t support recursive dependencies at all).

Beyond mere relevance, Repoint might be actively useful to your project if any of the following also apply:

  • The libraries you’re using are published through a mixture of version-control systems, e.g. some use Git but others Mercurial or Subversion; or
  • The libraries you’re using and, possibly, your own project might change from one version-control system to another at some point in the future.

See the README for more caveats and general documentation.

Example

The biggest current example of a project using Repoint is Sonic Visualiser. If you check out its code from Github or from the SoundSoftware code site and run its configure script, it will call out to repoint install to get the necessary dependencies. (On platforms that don’t use the configure script, you have to run Repoint yourself.)

Note that if you download a Sonic Visualiser source code tarball, there is no reference to Repoint in it and the Repoint script is never run — Repoint is very much an active-developer tool, and it includes an archive function that bundles up all the dependent libraries into a tarball so that people building or deploying the end result aren’t burdened with any additional utilities to use.

I also use Repoint in various smaller projects. If you’re browsing around looking at them, note that it wasn’t originally called Repoint — its working title in earlier versions was vext, and I haven’t quite finished switching the repos over. Those earlier versions work fine, of course; they just use different words.