Category: Blog

  • 21sh

    21sh

A project from the UNIX branch at School 42.

21sh is the continuation of the minishell project.

    New in 21sh project

    Mandatory implementations

[✓] Pipes '|'

[✓] The 4 following redirections: '<', '>', '<<' and '>>'

[✓] File descriptor aggregation: '<&' and '>&'

[✓] Basic line editing features using the termcap library
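
The pipe and redirection features above all come down to the same few system calls. As a rough sketch (not 21sh's actual code, which is written in C), here is how a shell wires `a | b` together with pipe(), fork() and dup2(), shown in Python for brevity:

```python
import os

def pipeline_output(cmds):
    """Run cmds as a shell-style pipeline (`cmds[0] | cmds[1] | ...`)
    and return the final output. Built from the same POSIX primitives
    a shell uses for '|': pipe(), fork() and dup2()."""
    prev_read = None              # read end of the previous stage's pipe
    final_r, final_w = os.pipe()  # capture the last stage's stdout
    for i, argv in enumerate(cmds):
        r = w = None
        if i < len(cmds) - 1:
            r, w = os.pipe()      # pipe feeding the next stage
        pid = os.fork()
        if pid == 0:  # child: wire stdin/stdout, then exec the command
            if prev_read is not None:
                os.dup2(prev_read, 0)
            os.dup2(w if w is not None else final_w, 1)
            for fd in (prev_read, r, w, final_r, final_w):
                if fd is not None:
                    os.close(fd)  # close leftover pipe fds before exec
            try:
                os.execvp(argv[0], argv)
            finally:
                os._exit(127)     # exec failed
        if prev_read is not None:
            os.close(prev_read)
        if w is not None:
            os.close(w)
        prev_read = r
    os.close(final_w)
    chunks = []
    while True:
        chunk = os.read(final_r, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(final_r)
    for _ in cmds:
        os.wait()
    return b"".join(chunks).decode()

print(pipeline_output([["echo", "hello"], ["tr", "a-z", "A-Z"]]), end="")  # HELLO
```

Each stage's stdout is duplicated onto the next stage's stdin before exec; the redirections '<' and '>' use the same dup2() trick with a file descriptor from open() instead of a pipe.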

    Bonuses

[✓] Implementation of a hash table for binaries, along with the hash builtin command

    [✓] Advanced auto completion using tab

    [✓] Search through history using ctrl+R

    [✓] Some other line editing features (see below)
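
The hash-table bonus speeds up command lookup: instead of rescanning every PATH directory for each command, the shell builds a name-to-path table once. A minimal sketch of the idea (illustrative only, not 21sh's implementation):

```python
import os
import tempfile

def build_binary_hash(path_env):
    """Map each executable name found on a PATH-style string to its full
    path, the way a shell's hash table avoids rescanning PATH on every
    command lookup. Earlier directories win, mirroring PATH precedence."""
    table = {}
    for directory in path_env.split(os.pathsep):
        try:
            names = os.listdir(directory)
        except OSError:
            continue  # skip unreadable or missing PATH entries
        for name in names:
            full = os.path.join(directory, name)
            if name not in table and os.path.isfile(full) and os.access(full, os.X_OK):
                table[name] = full
    return table

# Demo with a throwaway directory holding one fake executable.
demo_dir = tempfile.mkdtemp()
demo_bin = os.path.join(demo_dir, "mycmd")
open(demo_bin, "w").close()
os.chmod(demo_bin, 0o755)
print(build_binary_hash(demo_dir)["mycmd"] == demo_bin)  # True
```

A `hash` builtin then simply prints or resets this table instead of walking the filesystem again.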

    Hotkey list for line editing:

Key Details
Left Move cursor backward by one character
Right Move cursor forward by one character
Up Scroll back through the history
Down Scroll forward through the history
Ctrl + Left ⌃← Move cursor backward by one word
Ctrl + Right ⌃→ Move cursor forward by one word
Ctrl + Up ⌃↑ Move cursor backward by one row
Ctrl + Down ⌃↓ Move cursor forward by one row
Ctrl + Shift + Left ⌃⇧← Delete the word before the cursor
Ctrl + Shift + Right ⌃⇧→ Delete the word after the cursor
Ctrl + Shift + Up ⌃⇧↑ Delete the row before the cursor
Ctrl + Shift + Down ⌃⇧↓ Delete the row after the cursor
Return Confirm line entry
Backspace Delete the character before the cursor
Delete Delete the character under the cursor
Home Move cursor to the beginning of the line
End Move cursor to the end of the line
Tab Auto completion
Ctrl + R ⌃R Search the history
Ctrl + A ⌃A Same as Home
Ctrl + E ⌃E Same as End
Ctrl + U ⌃U Delete all characters before the cursor
Ctrl + K ⌃K Delete all characters after the cursor
Ctrl + G ⌃G Clear the whole line
Ctrl + H ⌃H Undo the last change
Ctrl + L ⌃L Clear the screen

    Preview

    Preview output

    ➜  21sh git:(master) ./21sh
    ✓ (21sh) cd /tmp/test_dir/
    ✓ (test_dir) pwd
    /tmp/test_dir
    ✓ (test_dir) env
    TERM_SESSION_ID=w0t0p0:D3D7901C-F606-4245-89ED-C2B1F3E713F3
    SSH_AUTH_SOCK=/private/tmp/com.apple.launchd.ldiuufG508/Listeners
    LC_TERMINAL_VERSION=3.3.6
    Apple_PubSub_Socket_Render=/private/tmp/com.apple.launchd.lcd6sflWma/Render
    COLORFGBG=7;0
    ITERM_PROFILE=Default
    XPC_FLAGS=0x0
    PWD=/tmp/test_dir
    SHELL=21sh
    LC_CTYPE=UTF-8
    TERM_PROGRAM_VERSION=3.3.6
    TERM_PROGRAM=iTerm.app
    PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/mysql/bin
    LC_TERMINAL=iTerm2
    COLORTERM=truecolor
    TERM=xterm-256color
    HOME=/Users/prippa
    TMPDIR=/var/folders/nc/lc4x38yx18sgjmwh0qyy9prw0000gn/T/
    USER=prippa
    XPC_SERVICE_NAME=0
    LOGNAME=prippa
    ITERM_SESSION_ID=w0t0p0:D3D7901C-F606-4245-89ED-C2B1F3E713F3
    __CF_USER_TEXT_ENCODING=0x0:7:49
    SHLVL=2
    OLDPWD=/Users/prippa/Desktop/21sh
    LC_ALL=en_US.UTF-8
    LANG=en_US.UTF-8
    ZSH=/Users/prippa/.oh-my-zsh
    PAGER=less
    LESS=-R
    LSCOLORS=Gxfxcxdxbxegedabagacad
    _=env
    ✓ (test_dir) touch riri
    ✓ (test_dir) ls
    riri
    ✓ (test_dir) rm riri ;cat riri 2>&-
    ✕ (test_dir) cat riri
    cat: riri: No such file or directory
    ✕ (test_dir) mkdir test; cd test; ls -a; ls | cat | wc -c > fifi; cat fifi
    .	..
           5
    ✓ (test) pwd
    /tmp/test_dir/test
    ✓ (test) echo '
    quote> Hello
    quote> World
    quote> !
    quote> '; echo "awesome $SHELL";\
    > exit
    
    Hello
    World
    !
    
    awesome 21sh
    exit
    ➜  21sh git:(master)
    

    more info

    Visit original content creator repository

  • MameRomCheck

    [ updated 1st of March 2020 ]

    MameRomCheck

Yet another tool to manage Mame roms – UI and API

It has a GUI (Tkinter or GTK) and an API. This is a very early release, but it is non-destructive,
since it only gathers information and doesn’t touch anything (for now..)

The goal is to provide a ‘simple’ user interface and a Python API for efficient romset listing in Mame,
without weeks of self-education about how Mame works:

• check whether a rom is really playable (not only marked as working)
    • check several romsets against several releases of Mame
    • remove duplicates, move roms in the right place
    • integrate into existing mame tools eco-system
    • click on a romset and run the compatible Mame release.

The goal is not to compete with very powerful analysis tools like clrMamePro or romCenter.
But both require a lot of self-education about how Mame works, and so far I was not able to automatically check whether a romset
is really playable, or to keep only the playable romsets in my roms directories.

environment(s):

    • Windows 10 for now

for the GTK interface:

    • python3.8.2 from mingw64
    • the GTK library

for the Tkinter interface:

    • python3.7+ for windows from python.org
    • PIL (pip install pillow)

This should work in other contexts, provided you use a Python 3.7+ release, but I haven’t had the chance to test it yet.

installation:

    • unzip master and open a cmd shell.
• it needs 3 files from the 7-Zip project to run: these can be found in the 7-Zip 19.00 64-bit x64 version, on the 7-zip.org website:
  https://www.7-zip.org/download.html
• the 7z1900-x64.exe can be opened as an archive with 7-Zip to extract the files
• If you didn’t have 7-Zip, then you should :). Install it (and uninstall WinRAR, WinZip etc. once confident).
• copy 7z.exe, 7z.dll and License.txt into the ./bin directory
• cd to the top level of the directory, then:
python bin
    

    The GTK user interface will open

    python bin/tk.py
    

The Tkinter user interface will open. Please note that I am working on the GTK UI first for now (2020/03/01),
so not everything will work as it should with Tkinter.

    add Mame installations

• click the add button under the Mame releases area and browse to your Mame64.exe runtime,
• the guess release button should populate the name and version fields (the Version field is important; you should leave it as is)
• click ok: the Mame release is added. In the Romset folders view, the roms folder(s) defined in mame.ini should appear.
• the ‘M’ icon means the corresponding folder is used by the currently selected Mame release. Try adding another Mame release to see it.
• one can add additional romset folders which are not used by any Mame installation
• name and/or path MUST be different for Mame releases and folders; no duplicates.

    list romsets

• click on a romset folder to see what it contains
• in the romsets view, click on a romset to populate some information. This makes use of the 7z.exe and dll files.
• the update button refreshes the romset list if needed.
• the verify button tests the romset with the currently selected Mame release.
• the verify all button tests all the romsets of the current folder with the currently selected Mame release. This is multithreaded, but it is uncompiled Python code: it takes around 25 seconds for 800 romsets on an i7.
• use the save button to save the information gathered.

    run

• a romset will be ok if the driver is reported as ‘good’ in the romset view/driver column (but this needs a couple more tests to be sure [bios tests, split romset etc…])
    • try the ‘Run with Mame xxx’ button

    API

The core code is separate from the interface, so one could script operations or integrate it into another Python project.

To run and play from a Python console:

    python -i bin/mameromcheck.py
    

The UIs make use of the methods listed below, and so can you.

    Mame.list()         # list your Mame Releases (ordered as in the conf.default.tab file and the ui)
    Romdir.list()       # list your romset folders (ordered as in the conf.default.tab file and the ui)
m = Mame.get(0)     # get the first Mame installation you defined. Mame.get(name) also works.
    m2 = Mame("C:\\ ... \\mame64.exe")  # create a new mame installation
    m2.name = 'my Mame'                 # add a name if you want to save it later
    
    m.run()             # run Mame
rd = Romdir.get(1)  # get the second romset folder you defined (a name also works).
    rd.romset.keys()    # romsets names in this folder
    rd.populate()       # update the romset of this folder
    rd.verify(0)        # silently run a -listxml Mame command on 
    r = rd.romset['romsetname']  # get a romset
    r.verify(0)         # verify it with the first Mame installation
    m.activate()        # romsets will return verification results of the first Mame release ( since m = Mame.get(0) )
    r.driver            # if 'good' is returned, then this should be ok for this release
    
    r.verify(1)            # verify it with the second Mame installation
    Mame.get(1).activate() # romsets will return verification results of the second installation
    r.driver               # so this could be a different result than previously
    r.run(0)               # run the romset with the first Mame release
    r.description
r.roms              # dictionary with rom information (crc)
    ...
    
task.info()     # info about parallel threads and current tasks
task.maxtasks   # max number of parallel threads, used by Romdir.verify(); default 5, use with caution (10 means 40% CPU on an 8th-gen i7 and is OK for me)
    task.verbose    # True|False
    ...
    
cfg.save()      # save everything in conf/default.tab; the file is human-readable
    
    [! WIP WIP WIP !]


  • chrono

    Project CHRONO


    Distributed under a permissive BSD license, Chrono is an open-source multi-physics package used to model and simulate:

    • dynamics of large systems of connected rigid bodies governed by differential-algebraic equations (DAE)
    • dynamics of deformable bodies governed by partial differential equations (PDE)
    • granular dynamics using either a non-smooth contact formulation resulting in differential variational inequality (DVI) problems or a smooth contact formulation resulting in DAEs
    • fluid-solid interaction problems whose dynamics is governed by coupled DAEs and PDEs
    • first-order dynamic systems governed by ordinary differential equations (ODE)
    • sensors (camera, LiDAR, GPS, IMU, SPAD) to support simulation in robotics and autonomous agents via a ROS2 interface

    Chrono provides a mature and stable code base that continues to be augmented with new features and modules. The core functionality of Chrono provides support for the modeling, simulation, and visualization of rigid and flexible multibody systems with additional capabilities offered through optional modules. These modules provide support for additional classes of problems (e.g., granular dynamics and fluid-solid interaction), modeling and simulation of specialized systems (such as ground vehicles and robots), co-simulation, run-time visualization, post-processing, interfaces to external linear solvers, or specialized parallel computing algorithms (multi-core, GPU, and distributed) for large-scale simulations.

    Used in many different scientific and engineering problems by researchers from academia, industry, and federal government, Chrono has mature support for multibody dynamics, finite element analysis, granular dynamics, fluid-solid interaction, ground vehicle simulation, robotics, embodied AI, and terramechanics.

Implemented almost entirely in C++, Chrono also provides Python and C# APIs. The build system is based on CMake. Chrono is platform-independent and is actively tested on Linux, Windows, and macOS using a variety of compilers.

    Documentation

    Support

    Note on Chrono repository structure

    The structure of the Chrono git repository was changed as follows:

    • The main development branch is now called main (previously develop)
    • The master branch, now obsolete, was deleted
    • Releases are located in branches named release/*.* and have tags of the form *.*.*
  • mediator.dart

    Mediator.dart


    Description

    A Mediator implementation for Dart inspired by MediatR.

    This package provides a simple yet configurable solution.

    Features

    • Request/Response
    • Commands
    • Request/Command Pipelines
    • Events
    • Event Observers

    Sending events

    An event can have multiple handlers. All handlers will be executed in parallel (by default).

    import 'package:dart_mediator/mediator.dart';
    
    /// Strongly typed event class containing the event data.
    /// All events must implement the [DomainEvent] interface.
    class MyEvent implements DomainEvent {}
    
    Future<void> main() async {
      final mediator = Mediator.create();
    
      // Subscribe to the event.
      mediator.events.on<MyEvent>()
        .subscribeFunction(
          (event) => print('event received'),
        );
    
      // Sends the event to all handlers.
      // This will print 'event received'.
      await mediator.events.dispatch(MyEvent());
    }

    Sending Commands

    A command can only have one handler and doesn’t return a value.

    /// This command will not return a value.
    class MyCommand implements Command {}
    
    class MyCommandHandler implements CommandHandler<MyCommand> {
      @override
      FutureOr<void> handle(MyCommand request) {
        // Do something
      }
    }
    
    Future<void> main() async {
      final mediator = Mediator.create();
    
      mediator.requests.register(MyCommandHandler());
    
      /// Sends the command request. Return value is [void].
      await mediator.requests.send(MyCommand());
    }

    Sending Requests

    A request can only have one handler and returns a value.

    import 'package:dart_mediator/mediator.dart';
    
    class Something {}
    
    /// This query will return a [Something] object.
    class MyQuery implements Query<Something> {}
    
    class MyQueryHandler implements QueryHandler<Something, MyQuery> {
      @override
      FutureOr<Something> handle(MyQuery request) {
        // do something
        return Something();
      }
    }
    
    Future<void> main() async {
      final mediator = Mediator.create();
    
      mediator.requests.register(MyQueryHandler());
    
      // Sends the query request and returns the response.
      final Something response = await mediator.requests.send(MyQuery());
    
      print(response);
    }

    Event Observers

An observer can be used to observe events being dispatched and handled, or when an error occurs, for example to log events.

    class LoggingEventObserver implements EventObserver {
    
      /// Called when an event is dispatched but before any handlers have
      /// been called.
      @override
      void onDispatch<TEvent>(
        TEvent event,
        Set<EventHandler> handlers,
      ) {
        print(
          '[LoggingEventObserver] onDispatch "$event" with ${handlers.length} handlers',
        );
      }
    
      /// Called when an event returned an error for a given handler.
      @override
      void onError<TEvent>(
        TEvent event,
        EventHandler handler,
        Object error,
        StackTrace stackTrace,
      ) {
        print('[LoggingEventObserver] onError $event -> $handler ($error)');
      }
    
      /// Called when an event has been handled by a handler.
      @override
      void onHandled<TEvent>(
        TEvent event,
        EventHandler handler,
      ) {
        print('[LoggingEventObserver] onHandled $event -> $handler');
      }
    }
    
    void main() {
      final mediator = Mediator.create(
        // Adds the logging event observer.
        observers: [LoggingEventObserver()],
      );
    
      // Dispatch an event.
    }
    

    Request/Command Pipeline Behavior

A pipeline behavior can be used to add cross-cutting concerns to requests/commands, for example logging.

    class LoggingBehavior implements PipelineBehavior {
      @override
      Future handle(dynamic request, RequestHandlerDelegate next) async {
        try {
          print('[LoggingBehavior] [${request.runtimeType}] Before');
          return await next();
        } finally {
          print('[LoggingBehavior] [${request.runtimeType}] After');
        }
      }
    }
    
    void main() {
        final mediator = Mediator.create();
    
        // add logging behavior
        mediator.requests.pipeline.registerGeneric(LoggingBehavior());
    }

    Credits

  • imgur-download

    Imgur Image Downloader


    This project is a Python script that enables you to download tagged images from Imgur.

    Download Modes

    This script supports two download modes: sequential and threaded.

    • Sequential: In this mode, images are downloaded one after another using only the main thread of the process.

    • Threaded: This mode creates multiple threads to download images concurrently.

    The script measures and logs the time taken to download the images in both modes. This enables you to see the effect of using the different modes and different numbers of threads on the script’s performance.
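
As an illustration of the two modes, here is a sketch comparing a sequential loop with a thread pool. The function names are made up for this example (the script's internals may differ), and a fake fetch stands in for the real requests call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def download_sequential(urls, fetch):
    """Download one image after another on the main thread."""
    return [fetch(url) for url in urls]

def download_threaded(urls, fetch, threads=10):
    """Download images concurrently with a pool of worker threads."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(fetch, urls))

def timed(label, fn, *args, **kwargs):
    """Measure and log how long a download run takes."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

def fake_fetch(url):
    """Stand-in for a real HTTP GET (e.g. requests.get); sleeps to
    simulate network latency so the timing difference is visible."""
    time.sleep(0.05)
    return url

urls = [f"https://i.imgur.com/{i}.jpg" for i in range(20)]
timed("sequential", download_sequential, urls, fake_fetch)
timed("threaded", download_threaded, urls, fake_fetch, threads=10)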

    Requirements

    The project’s only dependency is the requests module, which can be installed using pip:

    pip install requests

The script also requires an Imgur client ID, which should be set in your environment variables as imgur_client_id. To obtain an Imgur client ID, create an account on Imgur and follow the instructions on registering an application.

    How to Run

    To run the script, you need to use the command line. Navigate to the directory containing the script, then run the command with the following format:

    python download.py --tag <tag> --mode <mode> [--threads <threads>]

    Where:

• <tag> is the tag of the images you want to download. For example, astronomy or cats.
• <mode> is the download mode, which can be either threaded or sequential.
• <threads> (optional) is the number of threads to use in threaded mode. Default is 10. Only valid when --mode=threaded.

An example command to download images tagged with astronomy using 12 threads is:

    python download.py --tag astronomy --mode threaded --threads 12

    Imgur Tags

    Examples of Imgur tags you can use include: astronomy, cats, cars, nature, earth.

    Notes

    The downloaded images will be saved in an images directory in the same location as the script. Each run will save its images in a new directory with the current time stamp. For tags with multiple images, each image will be saved in a separate directory within this top-level directory.

  • sequence_generator

    Sequence Generator

    By Clayton Boneli

Number or letter sequences are easy to obtain when you are only interested in sequences of numbers (ascending or descending) that follow
a predefined order. For example, the decimal digits 0,1,2,3,4,5,6,7,8,9 always follow that order: the number 1 follows the number 0, and the number 3 follows the number 2.

Larger numbers follow the same order of formation: all are composed of digits between 0 and 9. The same applies to sequential letters, such as the vowels, which are character sequences that follow a predefined order.

But what if you need to create a sequence that has a completely different formation order? A string or number that does not follow the natural rule of decimal numbers or the alphabet? For example, suppose you need to create sequences like the following:

    AA-0001
    AA-0002
    AA-0003
    
    AA-9999
    AB-0001
    AB-0002
    AB-0003
    
    AB-9999
    AC-0001
    AC-0002
    AC-0003
    
    AC-9999
    AD-0001
    AD-0002
    AD-0003
    
    
    Other sequence
    
    A-2019-01
    A-2019-02
    A-2019-03
    
    A-2019-99
    B-2019-01
    B-2019-02
    
    B-2019-99
    C-2019-01
    C-2019-02

How do you create growing sequences made up of letters, numbers, punctuation marks, etc.? It is for this kind of need that the “sequence” package was created: it contains classes that allow you to define a sequence of alphanumeric values and to generate those values in ascending/descending sequential order.

You can define any sequence of numeric or alphanumeric characters: letters, numbers, decimals, hexadecimals, a DNA sequence, etc. Using the sequence generator, you can create sequences that will be generated in ascending or descending order.

    You can create your own sequences or use the predefined ones.

    Example:

    seq = factory("WM [0-9][0-9]")
    for x in range(100):
        print(seq.next().get())
        
    seq = factory("WM [0-9]{2}")
    for x in range(100):
        print(seq.next().get())
        
    seq = factory("WM [0-9]{2}", order=[0, 1])
    for x in range(100):
        print(seq.next().get())
        
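The package's factory API is shown above; the underlying idea is an odometer over arbitrary per-position alphabets. A minimal, self-contained sketch of that idea (illustrative only, not the package's own code):

```python
class CustomSequence:
    """Ascending counter over arbitrary per-position alphabets
    (an 'odometer'). Each position cycles through its own alphabet,
    carrying leftward when it wraps."""

    def __init__(self, alphabets):
        self.alphabets = alphabets          # one string per position
        self.indices = [0] * len(alphabets)

    def next(self):
        value = ''.join(a[i] for a, i in zip(self.alphabets, self.indices))
        # Increment the rightmost position, carrying leftward,
        # exactly like "...0009" rolling over to "...0010".
        for pos in reversed(range(len(self.indices))):
            self.indices[pos] += 1
            if self.indices[pos] < len(self.alphabets[pos]):
                break
            self.indices[pos] = 0           # carry (wraps at the very end)
        return value

# Equivalent of the pattern "WM [0-9][0-9]": fixed prefix, two digits.
seq = CustomSequence(["W", "M", " ", "0123456789", "0123456789"])
print(seq.next())  # WM 00
print(seq.next())  # WM 01
```

Single-character alphabets act as fixed separators (like the "-" in "AA-0001"), so any of the patterns shown earlier can be expressed this way.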

    Dependencies:

    • exrex


  • Audious

    Audious is a virtual assistant for music collections. Amongst other things, it provides the ability to manage albums by indicating the ones which are not in playlists. It also gives detailed statistics about a music collection, such as the number of artists and albums or the overall duration of a music category. It can also be used to sanitize playlists by showing corrupted or missing songs, and can export playlists in different formats, such as FLAC or MP3.

    🎶

    🧠 Broaden your musical horizons

The more your music collection grows, the more difficult it becomes to remember which albums and songs you liked or listened to. Playlists are there to help us remember which songs or albums we liked. But there might be albums that you didn’t place in a playlist. Audious will help you manage the albums that are not present in your playlists.

    Display the albums that are not in your playlists, yet!

    🔎 Learn to know your music collection

Having information about the song currently being played or even the year of the album you want to listen to is easy. However, getting the number of artists and albums, or finding out how long it would take to listen to your entire music collection, is a different story. Audious will give you statistics about your music collection as well as your playlists.

    Get statistics of your music collection but also of your playlists!

    ❤️ Less is more

Nowadays, space is cheap. But lossless music is still demanding in size. Having your entire music collection with you all the time on a phone might be impossible. Audious will export all the songs of your playlists and ensure that your favorite songs are always with you. Export only your playlists to keep your favorite songs!

    🎶

• Getting started: This section provides everything that is required to install Audious. It also shows how to set it up properly.
      1. Requirements
      2. Installation
      3. Edit the preferences
      4. Launching Audious
    • Tips: Several tips are given in this section to have a better user experience.
    • For Developers and Audiophiles: Audious has been designed as an open source project since day 1. This section clarifies the tool’s internals, explaining how to generate the source code documentation, and how the MP3 conversion is performed during the exportation process.
    • About: The origin of the project and the different licenses are detailed in this section.

    🎶

    Without music, life would be a mistake. — F. Nietzsche

    Getting started

    Requirements

    • A basic knowledge of lossless and lossy audio formats
    • A command line
    • Python 3
    • FFmpeg, which includes the FLAC and LAME packages
    • A music collection with FLAC songs
    • M3U playlists
    • A wish to organize a music collection with playlists

    Installation

    • Install FFmpeg and LAME on the OS:
      • On macOS: brew install ffmpeg lame
      • On Linux (Debian-based): sudo apt install ffmpeg lame
    • Clone this repository: git clone https://github.com/sljrobin/Audious
    • Go to the Audious directory: cd Audious/
    • Create and activate a Python virtual environment:
      • python3 -m venv venv
      • source ./venv/bin/activate
    • Install the requirements with pip3: pip install -r requirements.txt

    Edit the preferences

• A preferences file, named preferences.json, is available under a preferences/ directory located at the root of the repository. It needs to be properly configured before running Audious for the first time.
• The file is presented as follows:
    {
      "collection": {
        "root": "",
        "playlists": "",
        "music": {
          "artists": "",
          "soundtracks": ""
        }
      },
      "exportation": {
        "root": "",
        "playlists": "",
        "format": ""
      }
    }
• As can be seen above, the file contains two main keys, collection and exportation.

    Music collection: collection

    The collection key gives details about the music collection:

• root is the absolute path of the directory where the music collection is located
    • playlists is the directory containing all the playlists
    • music gives the different categories of the music collection. For instance:
      • artists is the directory that contains all the Artists of the music collection
      • soundtracks on the other hand, contains only soundtracks
      • Other music categories can be added under the music key (e.g. "spoken word": "Spoken Word/")
      • The artists and soundtracks keys are not mandatory, however, at least one key is required
    • Note: all given directories should have an ending / (e.g. Artists/, and not Artists)

For instance, let’s suppose that a simple music collection is structured as follows:

    Collection/
    ├── Artists/
    ├── Playlists/
    └── Soundtracks/
    

    The collection key in preferences.json should be edited as shown below:

    "collection": {
      "root": "/Users/<username>/Music/Collection/",
      "playlists": "Playlists/",
      "music": {
        "artists": "Artists/",
    "soundtracks": "Soundtracks/"
  }
}

    Music exportation: exportation

    The exportation key gives details about the playlists exportation:

• root is the absolute path of the directory where the exported songs and playlists will be located
• playlists is the directory containing all the exported playlists
• format is the song format for the playlist exportation; only two options are available: flac and mp3
    • Note: all given directories should have an ending / (e.g. Artists/, and not Artists)

For instance, let’s suppose that we create an Export/ directory in Collection/ and want to export all the songs of the playlists in FLAC; the exportation key in preferences.json should be edited as shown below:

    "exportation": {
      "root": "/Users/<username>/Music/Collection/Export/",
      "playlists": "Playlists/",
      "format": "flac"
    }

    Launching Audious

    • Ensure first the Python virtual environment is enabled by running source ./venv/bin/activate
    • Run Audious: python audious.py --help
    % python audious.py --help
    usage: audious.py [-h] [-e] [-p] [-s]
    
    optional arguments:
      -h, --help    show this help message and exit
      -e, --export  Export the playlists in FLAC or in MP3
      -p, --pick    Pick the albums from the music collection that are not in the playlists
      -s, --stats   Provide statistics of the music collection and the playlists
    
    • Everything is now ready!

    Tips

    Handling long outputs

    Because Audious is capable of parsing big music collections, the generated outputs might be relatively long. As a result, it might be difficult to have a quick glance at the statistics of a category or at the albums that were picked without scrolling.

    An easy way to handle this scrolling issue is to combine Audious with the less command, as shown in the examples below:

    • python audious.py -s | less -R
    • python audious.py -p | less -R

    Press the space bar to switch to the next page on the terminal.

    Hidden files

Hidden files on Linux or macOS begin with a dot (.). For instance, macOS creates lots of these files, called resource forks. As a result, an album with a song called 08 - High Voltage.flac might also contain a hidden file named ._08 - High Voltage.flac.

    Audious is capable of handling these hidden files; it will indicate that it is not a valid file and does not contain any valid metadata. Nevertheless, having these files might generate a lot of noise in Audious outputs with plenty of errors (e.g. The following song could not be parsed and will be ignored: [...]).

    To recursively remove these files and have clean outputs, go to the root of the music collection and use the following commands (Source):

    • To only display hidden files in the music collection:
    find /<path to music collection> -name '._*' -type f
    • To delete hidden files in the music collection:
    find /<path to music collection> -name '._*' -type f -delete

    For Developers and Audiophiles

    Code documentation

• The source code of Audious has been thoroughly documented in order to help people add new features or simply improve the code.
• Because the code is commented, generating documentation becomes easy.
• Among the most popular solutions, we recommend pydoc for generating the documentation.
    • Examples:
      • Generate the documentation for the Exporter() class: python -m pydoc lib/exporter.py
      • Generate the documentation for the entire library: python -m pydoc lib/*
      • Follow this tutorial for more information about pydoc
    • Snippet of the documentation for the lib/collection.py class:
    NAME
        collection
    
    CLASSES
        builtins.object
            Collection
    
        class Collection(builtins.object)
         |  Collection(display, preferences)
         |
         |  Methods defined here:
         |
         |  get_category_albums(self, category_songs)
         |      Open a category in the music collection and get a list of all the albums contained in this category.
         |      Handle macOS hidden files. Check the number of albums in the music category as well as in the music collection.
         |      Increment the total.
         |
         |      :param list category_songs: list of songs contained in a music category.
         |      :return list category_albums: list of albums contained in a music category.
         |
         |  get_category_songs(self, category, path)
         |      Open a category in the music collection and get a list of all the songs contained in this category. Select
         |      only .mp3 and .flac files with a regex.
         |
         |      :param str category: the music collection category name.
         |      :param str path: the path where the music collection category is located.
         |      :return list category_songs: list of songs contained in a music category.
    [...]
    

    MP3 conversion

    Ogg vs MP3

• The Ogg format offers better sound quality than the MP3 format. (Source)
• The MP3 format was chosen as the secondary format in the exportation options. This decision was made to ensure better compatibility with devices (e.g. vintage audio systems, etc.).

    Command

    • A list of all FFmpeg parameters can be obtained with ffmpeg --help.
    • The MP3 conversion is performed via FFmpeg with the following command:
    ffmpeg -v quiet -y -i <song.flac> -codec:a libmp3lame -qscale:a 0 -map_metadata 0 -id3v2_version 3 <song.mp3>
    • The parameters that were used are detailed below. They were carefully selected by following the FFmpeg MP3 Encoding Guide.
      • -v quiet: does not produce any log on the console
      • -y: overwrites output files
      • -i <song.flac>: gives a song in FLAC as input (note: a full path is required)
      • -codec:a libmp3lame: specifies to use the libmp3lame codec
      • -qscale:a 0: controls quality, where 0 is the lowest value and yields the highest possible quality
      • -map_metadata 0: properly maps the FLAC song metadata to the MP3 song metadata (Source)
      • -id3v2_version 3: selects ID3v2.3 for ID3 metadata
      • <song.mp3>: specifies the exported song in MP3 (note: a full path is required)
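
Invoking the command above from Python can be sketched with the standard library's subprocess module; the helper names and paths are illustrative, not from the project:

```python
import subprocess

def build_mp3_command(flac_path, mp3_path):
    """Build the FFmpeg argument list described above (full paths required)."""
    return [
        "ffmpeg", "-v", "quiet", "-y",
        "-i", flac_path,                 # FLAC song as input
        "-codec:a", "libmp3lame",        # use the libmp3lame codec
        "-qscale:a", "0",                # VBR, highest possible quality
        "-map_metadata", "0",            # map FLAC metadata to the MP3
        "-id3v2_version", "3",           # write ID3v2.3 metadata
        mp3_path,
    ]

def convert_to_mp3(flac_path, mp3_path):
    """Run the conversion; raises CalledProcessError if FFmpeg fails."""
    subprocess.run(build_mp3_command(flac_path, mp3_path), check=True)
```

Keeping the argument list in its own function makes the command easy to unit-test without actually running FFmpeg.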

    MP3 encoding

    • VBR Encoding was preferred to CBR Encoding.
    • -qscale:a 0 is equivalent to LAME’s -V 0 and produces an average of 245 kbps for each exported song.
    • More information about the settings is available here.

    Metadata mapping

    About

    Audious

    The name “Audious” was taken from HBO’s Silicon Valley. In this comedy television series, “Audious” is also a virtual assistant, but one that seems to have even more bugs!

    Licenses

    Visit original content creator repository
  • crnn-ctc

    «crnn-ctc» implements CRNN+CTC

    ONLINE DEMO: LICENSE PLATE RECOGNITION

    Model ARCH Input Shape GFLOPs Model Size (MB) EMNIST Accuracy (%) Training Data Testing Data
    CRNN CONV+GRU (1, 32, 160) 2.2 31 98.570 100,000 5,000
    CRNN_Tiny CONV+GRU (1, 32, 160) 0.1 1.7 98.306 100,000 5,000
    Model ARCH Input Shape GFLOPs Model Size (MB) ChineseLicensePlate Accuracy (%) Training Data Testing Data
    CRNN CONV+GRU (3, 48, 168) 4.0 58 82.147 269,621 149,002
    CRNN_Tiny CONV+GRU (3, 48, 168) 0.3 4.0 76.590 269,621 149,002
    LPRNetPlus CONV (3, 24, 94) 0.5 2.3 63.546 269,621 149,002
    LPRNet CONV (3, 24, 94) 0.3 1.9 60.105 269,621 149,002
    LPRNetPlus+STNet CONV (3, 24, 94) 0.5 2.5 72.130 269,621 149,002
    LPRNet+STNet CONV (3, 24, 94) 0.3 2.2 72.261 269,621 149,002

    For each sub-dataset, the model performance is as follows:

    Model CCPD2019-Test Accuracy (%) Testing Data CCPD2020-Test Accuracy (%) Testing Data
    CRNN 81.512 141,982 93.787 5,006
    CRNN_Tiny 75.729 141,982 92.829 5,006
    LPRNetPlus 62.184 141,982 89.373 5,006
    LPRNet 59.597 141,982 89.153 5,006
    LPRNetPlus+STNet 72.125 141,982 90.611 5,006
    LPRNet+STNet 71.291 141,982 89.832 5,006

    If you want to achieve license plate detection, segmentation, and recognition simultaneously, please refer to zjykzj/LPDet.

    News🚀🚀🚀

    Version Release Date Major Updates
    v1.3.0 2024/09/21 Add STNet module to LPRNet/LPRNetPlus and update the training/evaluation/prediction results on the CCPD dataset.
    v1.2.0 2024/09/17 Create a new LPRNet/LPRNetPlus model and update the training/evaluation/prediction results on the CCPD dataset.
    v1.1.0 2024/08/17 Update EVAL/PREDICT implementation, support Pytorch format model conversion to ONNX, and finally provide online demo based on Gradio.
    v1.0.0 2024/08/04 Optimize the CRNN architecture while achieving the super lightweight CRNN_Tiny. In addition, all training scripts support mixed precision training.
    v0.3.0 2024/08/03 Implement models CRNN_LSTM and CRNN_GRU on datasets EMNIST and ChineseLicensePlate.
    v0.2.0 2023/10/11 Support training/evaluation/prediction of CRNN+CTC based on license plate.
    v0.1.0 2023/10/10 Support training/evaluation/prediction of CRNN+CTC based on EMNIST digital characters.

    Background

    This repository aims to better understand and apply CRNN+CTC, and currently achieves digit recognition and license plate recognition. Meanwhile, LPRNet(+STNet) is a purely convolutional license plate recognition network. I believe that the implementation of these algorithms can help with the deployment of license plate recognition, such as on edge devices.

    Relevant papers include:

    1. Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline
    2. An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition
    3. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks
    4. LPRNet: License Plate Recognition via Deep Neural Networks
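
At inference time, CTC output is typically decoded greedily: take the per-frame argmax, collapse consecutive repeated labels, then remove the blank symbol. A minimal sketch of that decoding rule (the blank index and example labels are illustrative, not tied to this repository's alphabet):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Apply the CTC decoding rule: collapse repeats, then drop blanks."""
    decoded = []
    prev = None
    for label in frame_labels:
        # Keep a label only when it differs from the previous frame
        # and is not the blank symbol.
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# Per-frame argmax sequence -> decoded label sequence
print(ctc_greedy_decode([0, 3, 3, 0, 3, 5, 5, 0]))  # -> [3, 3, 5]
```

Note that the blank between the two 3s is what allows a genuinely repeated character to survive the repeat-collapsing step.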

    Relevant blogs (Chinese):

    1. Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline
    2. An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition
    3. LPRNet: License Plate Recognition via Deep Neural Networks

    Installation

    $ pip install -r requirements.txt

    Or use docker container

    $ docker run -it --runtime nvidia --gpus=all --shm-size=16g -v /etc/localtime:/etc/localtime -v $(pwd):/workdir --workdir=/workdir --name crnn-ctc ultralytics/yolov5:latest

    Usage

    Train

    # EMNIST
    $ python3 train_emnist.py ../datasets/emnist/ ./runs/crnn-emnist-b512/ --batch-size 512 --device 0 --not-tiny
    # Plate
    $ python3 train_plate.py ../datasets/chinese_license_plate/recog/ ./runs/crnn-plate-b512/ --batch-size 512 --device 0 --not-tiny

    Eval

    # EMNIST
    $ CUDA_VISIBLE_DEVICES=0 python eval_emnist.py crnn-emnist.pth ../datasets/emnist/ --not-tiny
    args: Namespace(not_tiny=True, pretrained='crnn-emnist.pth', use_lstm=False, val_root='../datasets/emnist/')
    Loading CRNN pretrained: crnn-emnist.pth
    crnn-emnist summary: 29 layers, 7924363 parameters, 7924363 gradients, 2.2 GFLOPs
    Batch:49999 ACC:100.000: 100%|████████████████████████████████████████████████████████| 50000/50000 [03:47<00:00, 219.75it/s]
    ACC:98.570
    # Plate
    $ CUDA_VISIBLE_DEVICES=0 python3 eval_plate.py crnn-plate.pth ../datasets/chinese_license_plate/recog/ --not-tiny
    args: Namespace(add_stnet=False, not_tiny=True, only_ccpd2019=False, only_ccpd2020=False, only_others=False, pretrained='crnn-plate.pth', use_lprnet=False, use_lstm=False, use_origin_block=False, val_root='../datasets/chinese_license_plate/recog/')
    Loading CRNN pretrained: crnn-plate.pth
    crnn-plate summary: 29 layers, 15083854 parameters, 15083854 gradients, 4.0 GFLOPs
    Load test data: 149002
    Batch:4656 ACC:100.000: 100%|████████████████████████████████████████████████████████████| 4657/4657 [00:52<00:00, 89.13it/s]
    ACC:82.147
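
The ACC numbers above appear to be exact-match sequence accuracy: a prediction counts as correct only if every character matches its label. A hedged sketch of that metric (the helper name is illustrative):

```python
def sequence_accuracy(predictions, labels):
    """Percentage of predictions that exactly match their labels."""
    assert len(predictions) == len(labels) and labels
    correct = sum(1 for pred, label in zip(predictions, labels) if pred == label)
    return 100.0 * correct / len(labels)

# One of the two plates differs in its last character.
print(sequence_accuracy(["宁A87J92", "川A3X7J1"], ["宁A87J92", "川A3X7J2"]))  # -> 50.0
```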

    Predict

    $ CUDA_VISIBLE_DEVICES=0 python predict_emnist.py crnn-emnist.pth ../datasets/emnist/ ./runs/predict/emnist/ --not-tiny
    args: Namespace(not_tiny=True, pretrained='crnn-emnist.pth', save_dir='./runs/predict/emnist/', use_lstm=False, val_root='../datasets/emnist/')
    Loading CRNN pretrained: crnn-emnist.pth
    crnn-emnist summary: 29 layers, 7924363 parameters, 7924363 gradients, 2.2 GFLOPs
    Label: [0 4 2 4 7] Pred: [0 4 2 4 7]
    Label: [2 0 6 5 4] Pred: [2 0 6 5 4]
    Label: [7 3 9 9 5] Pred: [7 3 9 9 5]
    Label: [9 6 6 0 9] Pred: [9 6 6 0 9]
    Label: [2 3 0 7 6] Pred: [2 3 0 7 6]
    Label: [6 5 9 5 2] Pred: [6 5 9 5 2]

    $ CUDA_VISIBLE_DEVICES=0 python predict_plate.py crnn-plate.pth ./assets/plate/宁A87J92_0.jpg runs/predict/plate/ --not-tiny
    args: Namespace(add_stnet=False, image_path='./assets/plate/宁A87J92_0.jpg', not_tiny=True, pretrained='crnn-plate.pth', save_dir='runs/predict/plate/', use_lprnet=False, use_lstm=False, use_origin_block=False)
    Loading CRNN pretrained: crnn-plate.pth
    crnn-plate summary: 29 layers, 15083854 parameters, 15083854 gradients, 4.0 GFLOPs
    Pred: 宁A·87J92 - Predict time: 5.4 ms
    Save to runs/predict/plate/plate_宁A87J92_0.jpg
    $ CUDA_VISIBLE_DEVICES=0 python predict_plate.py crnn-plate.pth ./assets/plate/川A3X7J1_0.jpg runs/predict/plate/ --not-tiny
    args: Namespace(add_stnet=False, image_path='./assets/plate/川A3X7J1_0.jpg', not_tiny=True, pretrained='crnn-plate.pth', save_dir='runs/predict/plate/', use_lprnet=False, use_lstm=False, use_origin_block=False)
    Loading CRNN pretrained: crnn-plate.pth
    crnn-plate summary: 29 layers, 15083854 parameters, 15083854 gradients, 4.0 GFLOPs
    Pred: 川A·3X7J1 - Predict time: 4.7 ms
    Save to runs/predict/plate/plate_川A3X7J1_0.jpg

    Maintainers

    • zhujian – Initial work – zjykzj

    Thanks

    Contributing

    Anyone’s participation is welcome! Open an issue or submit PRs.

    License

    Apache License 2.0 © 2023 zjykzj

    Visit original content creator repository