Donald Feury

Software Development, Linux, and Gaming

Go Modules

YouTube Video

Managing dependencies in Go used to be a mess. There were three or four widely used dependency management tools, which sometimes made it awkward to work on another code base that used a different tool.

Finally, in version 1.11, Go included its own dependency management tool, called Go modules. Go modules are now the standard way of handling your Go project's dependencies.

In this video, I give a brief rundown of how to add dependencies to your project, get a better understanding of why certain packages are being included, and trim old dependencies out of your project.

Check it out and let me know what y'all think. Any feedback is greatly appreciated.

If you liked it and want to know when I post more, be sure to subscribe, and thank y'all again for your time!

#go #golang #modules

YouTube Video

Despite not having traditional Object Oriented Programming (OOP) language features, Go does have interfaces.

Interfaces allow you to interchange which data types you pass as arguments to a function, as long as the type has the required methods.

Interfaces in Go differ from those in, say, C# or Java, in that you don't have to explicitly state that you are implementing an interface, which is pretty cool.
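As a quick taste of that implicit satisfaction (the types here are my own illustration, not from the video), a type satisfies an interface just by having the right methods:

```go
package main

import "fmt"

// Speaker is satisfied by any type that has a Speak() string method.
// There is no "implements" keyword; satisfaction is implicit.
type Speaker interface {
	Speak() string
}

type Dog struct{}

func (d Dog) Speak() string { return "Woof" }

type Robot struct{}

func (r Robot) Speak() string { return "Beep" }

// Greet accepts any value that satisfies Speaker.
func Greet(s Speaker) string {
	return "They say: " + s.Speak()
}

func main() {
	fmt.Println(Greet(Dog{}))   // Dog never declares it implements Speaker
	fmt.Println(Greet(Robot{})) // neither does Robot
}
```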

Check it out and let me know what y'all think. Any feedback is greatly appreciated.

If you liked it and want to know when I post more, be sure to subscribe, and thank y'all again for your time!

#go #golang #interfaces

YouTube Video

Code available here

For my next Go tutorial, I decided to actually demonstrate how to create a program; in this case, a REST API.

But not just any REST API, oh no. This one is built using only the standard library available in Go. While there are really nice packages for doing this, such as gorilla/mux, I wanted to demonstrate the power of Go's standard library.

Check it out and let me know what y'all think. Any feedback is greatly appreciated.

If you liked it and want to know when I post more, be sure to subscribe, and thank y'all again for your time!

#go #golang #rest

YouTube Video

My second programming tutorial on Go, this time covering the basics of its concurrency primitives.

I took some of the feedback from the first video and applied it to this one.

I contemplated talking about a few concurrency patterns commonly used in go in this video as well, but decided it would be better to do that in a separate video. That way I can give each the time it deserves.
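The basics in question are goroutines, channels, and sync. As a small sketch of how they fit together (my own example, not one from the video), here a few worker goroutines square numbers fed over a channel:

```go
package main

import (
	"fmt"
	"sync"
)

// sumOfSquares fans the numbers out to `workers` goroutines over a
// jobs channel, collects the squares on a results channel, and sums them.
func sumOfSquares(nums []int, workers int) int {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- n * n
			}
		}()
	}

	// Feed the jobs channel, then close it so the workers exit.
	go func() {
		for _, n := range nums {
			jobs <- n
		}
		close(jobs)
	}()

	// Close results once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(sumOfSquares([]int{1, 2, 3, 4, 5}, 3)) // prints 55
}
```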

Let me know what y'all think about it.

If you liked it and want to know when I post more, be sure to subscribe, and thank y'all again for your time!

#go #golang #concurrency

YouTube Video

I made my first “comprehensive” video tutorial relating to programming.

In this one I explain how to utilize the basic language features of Go, as fast as I can, without making tons of mistakes.

I want to make more and better video tutorials like this one and would greatly appreciate any feedback about it.

If you liked it and want to know when I post more, be sure to subscribe, and thank y'all for taking the time to look it over!

#go #golang

YouTube Video

I recently started building up a lot of small Go packages that will lead up to a more comprehensive automated video editing tool I want to build.

While creating these small packages, I wanted a simple CLI program to interact with each one, done more properly than my earlier shell scripts.

I wanted to start by handling arguments properly, namely by parsing flags passed to the program.

Turns out, Go already has a pretty nice package in the standard library for this, called flag.

Set Up Valid Flags

So, how do we go about declaring what flags we want to accept? Well, there are two main approaches you can use.

myStrFlag := flag.String("mystr", "", "This is my string flag")

This approach declares a flag called "mystr" and returns a pointer; when your arguments are parsed, the flag's value is stored at that pointer. The second argument is the default value used if no value is given for that flag. The final argument is the message displayed when the usage info for the flags is shown. (more on that later)

There is a similar way to achieve this:

var myStrFlag string
flag.StringVar(&myStrFlag, "mystr", "", "This is my string")

This is almost identical to the first approach, except that instead of the function returning a pointer, we pass in a pointer to an existing variable and the parsed value is stored in that variable. I prefer this approach myself.

Flag Argument Formats

Using the previous example, if we declare a valid flag called "mystr", the flag can be passed in any of these valid formats:

  • -mystr=value
  • --mystr=value
  • -mystr value
  • --mystr value

If the flag is a boolean flag, these are the valid formats:

  • -mybool
  • --mybool
  • -mybool=value
  • --mybool=value

You'll notice that with boolean flags we can't use the -flag value syntax. I lost about an hour one day not realizing this, wondering why the rest of my flags weren't being parsed. Turns out I was accidentally trying to use a boolean flag like any other flag and passing its value after a space, which causes flag parsing to stop.
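A small sketch of that gotcha (the flag names here are made up for illustration). Parsing the same boolean flag with the =value form versus the space-separated form gives very different results:

```go
package main

import (
	"flag"
	"fmt"
)

// parse runs a fresh FlagSet over args and reports what got parsed.
func parse(args []string) (verbose bool, name string, rest []string) {
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	fs.BoolVar(&verbose, "verbose", false, "enable verbose output")
	fs.StringVar(&name, "name", "", "name to greet")
	fs.Parse(args)
	return verbose, name, fs.Args()
}

func main() {
	// The =value form works for booleans: both flags get parsed.
	v, n, _ := parse([]string{"-verbose=true", "-name", "Don"})
	fmt.Println(v, n)

	// The space-separated form does NOT: parsing stops at "true",
	// so -name is never seen and ends up in the leftover args.
	v, n, rest := parse([]string{"-verbose", "true", "-name", "Don"})
	fmt.Println(v, n, rest)
}
```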

Flag Usage

One of the great advantages of the flag package is that it makes it super easy to print the usage info for your flags!

By default, if you want to display the usage information, all you have to do is call:

flag.PrintDefaults()

This will print out each flag you've defined, along with the help message you passed as the last argument to each flag definition.

Now, if you want to change what gets printed when printing the usage, you can override it easily.

flag.Usage = func() {
  fmt.Println("This is my new help message")
  fmt.Println("This will get displayed instead of the default messages")
}

Parsing Arguments

So we've declared our flags and set up our usage message; how do we actually parse the program arguments into our variables? Thankfully, with one function call:

flag.Parse()

That is it, that is all you have to do and, tada, you've got values stored in your variables!

You'll also notice the flags are typed (StringVar, IntVar, etc). While parsing the arguments, if someone gives, say, a file path for an integer flag, the program will automatically display an error message, print the default usage, and exit! That is a lot of functionality right out of the box, basically for free.

Summary

You should now be able to:

  • Define a set of valid flag arguments for your program
  • Have default values and usage information for said flags
  • Parse the program arguments into your variables
  • Print the usage information if needed

If you have any further questions about what you can do with the flag package, just check the docs.

Thank you for your time!

#golang #go

YouTube Video

I recently updated my Python script for automatically removing the silent parts of a video.

Previously, I had to call the shell script separately to generate the silence timestamps.

Now, the Python script grabs the output of the shell script directly using subprocess.run.

Script

#!/usr/bin/env python

import sys
import subprocess
import os
from moviepy.editor import VideoFileClip, concatenate_videoclips

input_path = sys.argv[1]
out_path = sys.argv[2]
threshold = sys.argv[3]
duration = sys.argv[4]

try:
    ease = float(sys.argv[5])
except IndexError:
    ease = 0.2

minimum_duration = 1.0

def generate_timestamps(path, threshold, duration):
    command = "detect_silence {} {} {}".format(path, threshold, duration)
    output = subprocess.run(command, shell=True, capture_output=True, text=True)
    return output.stdout.split('\n')[:-1]


def main():
    count = 0
    last = 0
    timestamps = generate_timestamps(input_path, threshold, duration)
    print("Timestamps: {}".format(timestamps))
    video = VideoFileClip(input_path)
    full_duration = video.duration
    clips = []

    for times in timestamps:
        end,dur = times.strip().split()
        print("End: {}, Duration: {}".format(end, dur))

        to = float(end) - float(dur) + ease

        start = float(last)
        clip_duration = float(to) - start
        # Clips less than one seconds don't seem to work
        print("Clip Duration: {} seconds".format(clip_duration))

        if clip_duration < minimum_duration:
            continue

        if full_duration - to < minimum_duration:
            continue


        print("Clip {} (Start: {}, End: {})".format(count, start, to))
        clip = video.subclip(start, to)
        clips.append(clip)
        last = end
        count += 1

    if not clips:
        print("No silence detected, exiting...")
        return


    if full_duration - float(last) > minimum_duration:
        print("Clip {} (Start: {}, End: {})".format(count, last, 'EOF'))
        clips.append(video.subclip(last))

    processed_video = concatenate_videoclips(clips)
    processed_video.write_videofile(
        out_path,
        fps=60,
        preset='ultrafast',
        codec='libx264',
        audio_codec='aac'
    )

    video.close()


main()

I won't go over this in full detail, as I did that in the last post about the silence trimming script; I will just break down the changes I made.

For a more detailed breakdown of the scripts, check out that post:

{% link https://dev.to/dak425/automatically-trim-silence-from-video-with-ffmpeg-and-python-2kol %}

Changes

def generate_timestamps(path, threshold, duration):
    command = "detect_silence {} {} {}".format(path, threshold, duration)
    output = subprocess.run(command, shell=True, capture_output=True, text=True)
    return output.stdout.split('\n')[:-1]

Here I created a function that passes the arguments needed by the detect silence script and executes it using subprocess.run.

It needs capture_output=True to actually save the output, and text=True so the output comes back as a string; otherwise it's returned as raw bytes.

I then split on the newlines and drop the last entry, as it's just an empty string that isn't needed.

Since we are grabbing the script output straight from stdout, we no longer have to open and read an arbitrary text file to get the timestamps.

One last change: before, I was adding padding to the start of the next clip to make the transitions less abrupt. Now I add it to the end of the last clip, as it seems more natural.

if not clips:
    print("No silence detected, exiting...")
    return

I also added this sanity check to make sure there were actually clips generated; can't concatenate clips that don't exist.

That's it! Now I can remove the silent parts of a video by calling only one script! It also avoids creating the intermediate timestamp file.

#ffmpeg #python #videoediting

YouTube Video

I had someone email me asking how to solve the following problem:

I would like to take video A, video B, and replace the audio in video A with the audio from video B

The approach they were trying was as follows:

  1. Extract only the video from video A
  2. Extract only the audio from video B, while also transcoding it to a codec he needed it to be in
  3. Merge these two files together

Now, this approach is fine, but he encountered an issue. He needed the audio in a WAV file, but the WAV format wasn't compatible with the codec he needed to transcode the audio into.

So what does he do?

I showed him you can do all of this in one command, avoiding the file format issue while also not creating the intermediate files.

Let me show you the example I showed him and I will break it down.

VIDEO=$1
AUDIO=$2
OUTPUT=$3

ffmpeg -an -i $VIDEO -vn -i $AUDIO -map 0:v -map 1:a -c:v copy -c:a pcm_s8_planar $OUTPUT

VIDEO=$1

This is the file he wants to use the video stream from, so in his case its video A.

AUDIO=$2

This is the file he wants to use the audio from, making this video B.

OUTPUT=$3

The file path to save the combined result to.

-an -i $VIDEO

The -an option before the input means ignore the audio stream. This will give us only the video stream for this file. It also speeds up the command by avoiding having to read the audio stream.

-vn -i $AUDIO

The -vn option before the input means ignore the video stream. This will give us only the audio stream for this file. It also speeds up the command by avoiding having to read the video stream.

-map 0:v -map 1:a

The -map options explicitly tell ffmpeg which streams of data to write to the output, instead of letting it figure that out on its own. This may not have been needed, but I'd rather be explicit when I need to be.

-c:v copy -c:a pcm_s8_planar $OUTPUT

The -c:v copy option makes ffmpeg just copy the video stream over, avoiding a decode and re-encode. This makes it really fast.

The -c:a pcm_s8_planar option transcodes the audio stream to the codec he needed it to be in.

Lastly, we just tell ffmpeg to write to the output path given.

aaannnddddd...

Drum roll please...

It worked like a charm! He was very happy to be able to continue with his project.

#ffmpeg #videoediting

YouTube Video

Even though I'm only about one month into my YouTube channel, I thought some people might be interested in how I go about creating and uploading the videos and thumbnails.

To summarize the video above, the process goes like this:

  1. Record the video using OBS; this usually involves only one file, but there have been a few times I needed to stop and start again.

  2. If multiple recordings were made, concatenate them using ffmpeg's concat demuxer.

  3. Find the timestamp in seconds in my video for the plug. (a.k.a like, sub, blah blah)

  4. Run the video through my finalize video script to add in the fade in and out, overlay the sub animation, and append the outro.

  5. While it's processing, take a dumb snapshot with my webcam.

  6. Edit snapshot in Gimp

    • Do a little color correction and probably brighten the image.
    • Crop it down to 1280x720, keeping my face to the right side of the image.
    • Add text to the left side of the image, usually some variant of the video title.
    • Put a box behind the text to give them some contrast.
    • Add any images if applicable. (ex. ffmpeg logo, YouTube logo)
    • ... That's it, usually takes like five minutes.
  7. Quickly check over the video once it's done processing.

  8. Upload if it looks good.

  9. Do the usual SEO stuff (tags, description, title)

  10. Add thumbnail exported from Gimp

  11. Once the video is processed on YouTube, add the end screen where the fade in starts.

  12. Add any cards if applicable, such as references to other videos. I always add the relevant playlist to the start of the video.

  13. Content!

That's it for now. Once I get consistent lighting in my office, I'm gonna add proper color correction to the finalization script.

#videoediting #productivity

Odysee YouTube


I finally did it; I managed to figure out a little process to automatically remove the silent parts from a video.

Let me show y'all the process and the two main scripts I use to accomplish this.

Process

  1. Use ffmpeg's silencedetect filter to generate output of sections of the video's audio with silence
  2. Pipe that output through a few programs to get the output in the format that I want
  3. Save the output into a text file
  4. Use that text file in a python script that sections out the parts of the video with audio, and save the new version with the silence removed

Now, with the process laid out, let's look at the scripts doing the heavy lifting.

Scripts

Here is the script for generating the silence timestamp data:

#!/usr/bin/env sh

IN=$1
THRESH=$2
DURATION=$3

ffmpeg -hide_banner -vn -i $IN -af "silencedetect=n=${THRESH}dB:d=${DURATION}" -f null - 2>&1 | grep "silence_end" | awk '{print $5 " " $8}' > silence.txt

I'm passing in three arguments to this script:

  • IN – the file path to the video I want to analyze

  • THRESH – the volume threshold the filter uses to determine what counts as silence

  • DURATION – the length of time in seconds the audio needs to stay below the threshold to count as a section of silence

That leaves us with the actual ffmpeg command:

ffmpeg -hide_banner -vn -i $IN -af "silencedetect=n=${THRESH}dB:d=${DURATION}" -f null - 2>&1 | grep "silence_end" | awk '{print $5 " " $8}' > silence.txt

  • -hide_banner – hides the initial dump of info ffmpeg shows when you run it

  • -vn – ignore the input file's video stream; we only need the audio, and skipping the video stream speeds up the process a lot since ffmpeg doesn't need to demux and decode it.

  • -af "silencedetect=n=${THRESH}dB:d=${DURATION}" – detects the silence in the audio and logs the results, which I pipe to other programs

Each matching line of the silencedetect output reports a silence_end timestamp and a silence_duration (e.g. silence_end: 86.7141 | silence_duration: 5.29422).

  • -f null - 2>&1 – don't write any output streams, and redirect stderr (where ffmpeg logs the filter output) to stdout so it can be piped

  • grep "silence_end" – we first pipe the output to grep, keeping only the lines that contain "silence_end"

  • awk '{print $5 " " $8}' > silence.txt – lastly, we pipe that output to awk and write the fifth and eighth fields to a text file

The final output looks like this:

86.7141 5.29422
108.398 5.57798
135.61 1.0805
165.077 1.06485
251.877 1.11594
283.377 5.21286
350.709 1.12472
362.749 1.24295
419.726 4.42077
467.997 5.4622
476.31 1.02338
546.918 1.35986

You might ask, why did I not grab the silence start timestamp? Because the two numbers I grabbed are the ending timestamp and the duration; if I just subtract the duration from the ending timestamp, I get the starting timestamp!

So finally we get to the Python script that processes the timestamps. The script makes use of a Python library called moviepy; you should check it out!

#!/usr/bin/env python

import sys
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Input file path
file_in = sys.argv[1]
# Output file path
file_out = sys.argv[2]
# Silence timestamps
silence_file = sys.argv[3]

# Ease in duration between cuts
try:
    ease = float(sys.argv[4])
except IndexError:
    ease = 0.0

minimum_duration = 1.0

def main():
    # number of clips generated
    count = 0
    # start of next clip
    last = 0

    in_handle = open(silence_file, "r", errors='replace')
    video = VideoFileClip(file_in)
    full_duration = video.duration
    clips = []
    while True:
        line = in_handle.readline()

        if not line:
            break

        end,duration = line.strip().split()

        to = float(end) - float(duration)

        start = float(last)
        clip_duration = float(to) - start
        # Clips less than one seconds don't seem to work
        print("Clip Duration: {} seconds".format(clip_duration))

        if clip_duration < minimum_duration:
            continue

        if full_duration - to < minimum_duration:
            continue

        if start > ease:
            start -= ease

        print("Clip {} (Start: {}, End: {})".format(count, start, to))
        clip = video.subclip(start, to)
        clips.append(clip)
        last = end
        count += 1

    if full_duration - float(last) > minimum_duration:
        print("Clip {} (Start: {}, End: {})".format(count, last, 'EOF'))
        clips.append(video.subclip(float(last)-ease))

    processed_video = concatenate_videoclips(clips)
    processed_video.write_videofile(
        file_out,
        fps=60,
        preset='ultrafast',
        codec='libx264'
    )

    in_handle.close()
    video.close()

main()

Here I pass in 3 required and 1 optional argument:

  • file_in – the input file to work on, should be the same as the one passed into the silence detection script

  • file_out – the file path to save the final version to

  • silence_file – the file path to the file generated by the silence detection

  • ease_in – a work-in-progress concept. I noticed the jumps between clips are kinda sudden and too abrupt, so I add about half a second of padding to when the next clip is supposed to start, to make it less abrupt.

You will see there is a minimum_duration; that is because I found in testing that moviepy will crash when trying to write out a clip that is less than a second long. There are a few sanity checks using it to determine whether a clip should be extracted yet or not. That part is still very rough, though.

I track when the next clip to be written out should start in the last variable, which records when the previous section of silence ended.

The logic for writing out clips works like so:

  • Get the starting timestamp of silence

  • Write out a clip from the end of the last section of silence, until the start of the next section of silence, and store it in a list

  • Store the end of the next section of silence in a variable

  • Repeat until all sections of silence are exhausted

Lastly, we write the remainder of the video as the final clip, use the concatenate_videoclips function from moviepy to combine the list of clips into one video clip, and call the write_videofile method of the VideoClip class to save the final output to the out path passed into the script.

Tada! You've got a new version of the video with the silent parts removed!

I will try to show a before and after video of the process soon.

#ffmpeg #python #videoediting