Category: Blog

  • docker-container-manager

    Advanced Docker Container Manager (dcon)

    A comprehensive Docker container management tool with an intuitive interface featuring shell access, log viewing, stats monitoring, port mappings, favorites, and much more.

    Features

    • Interactive Shell Access – Connect to containers with automatic shell detection (bash/sh/zsh/ash)
    • Live Log Viewing – Follow logs with timestamps and configurable tail length
    • Real-time Stats – Monitor CPU, memory, and network usage
    • Container Information – Detailed container inspection and port mappings
    • Favorites System – Mark frequently used containers for quick access
    • Command History – Track your recent container interactions
    • Dynamic Tables – Responsive column widths that adapt to your container names
    • Partial Name Matching – Type partial names to quickly find containers
    • Detailed/Simple Views – Toggle between compact and comprehensive displays
    • Container Management – Restart containers directly from the interface
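
    The shell auto-detection mentioned above can be sketched roughly as follows. This is an illustrative guess at the approach, not dcon's actual code; try_shell and detect_shell are invented helper names, and the preference order is an assumption.

```shell
# Sketch of automatic shell detection (hypothetical helper names).
# try_shell asks the container whether a given shell binary exists;
# it is a separate function so the probe can be swapped out.
try_shell() {
  docker exec "$1" sh -c "command -v $2 >/dev/null 2>&1"
}

# Print the first usable shell for a container, in an assumed
# preference order (most featureful first, plain sh as fallback).
detect_shell() {
  local container=$1 candidate
  for candidate in bash zsh ash sh; do
    if try_shell "$container" "$candidate"; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1  # no known shell found
}

# A caller would then run, e.g.:
#   docker exec -it "$c" "$(detect_shell "$c")"
```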

    Usage Options

    Option 1: Interactive Mode

    List all running containers with an interactive selection menu:

    dcon

    Interactive Commands:

    • [1-9] or [container-name] – Select container
    • d – Toggle advanced view (shows IP, uptime, ports, favorites)
    • f – Show only favorite containers
    • h – View command history
    • q – Quit

    Option 2: Direct Access

    Connect directly to a specific container (supports partial matching):

    dcon web-server
    dcon web      # Matches containers with "web" in the name

    Container Actions

    Once you select a container, you can:

    1. Execute Shell – Interactive bash/sh session
    2. Show Logs (Live) – tail -f with timestamps
    3. Show Stats – Real-time CPU/memory monitoring
    4. Container Info – Detailed inspection data
    5. Port Mappings – View all port configurations
    6. Restart Container – Restart the selected container
    7. Manage Favorites – Add/remove from favorites
    8. Show Logs (Static) – View logs without following
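
    Under the hood, most of these actions map onto standard docker CLI commands. The mapping below is a hedged sketch (run_action and the exact tail length are illustrative, not dcon's source; the docker flags themselves are standard):

```shell
# Illustrative mapping from menu options to docker commands.
run_action() {
  local choice=$1 c=$2
  case $choice in
    1) docker exec -it "$c" sh ;;                      # Execute Shell
    2) docker logs -f --timestamps --tail 100 "$c" ;;  # Show Logs (Live)
    3) docker stats "$c" ;;                            # Show Stats
    4) docker inspect "$c" ;;                          # Container Info
    5) docker port "$c" ;;                             # Port Mappings
    6) docker restart "$c" ;;                          # Restart Container
    # 7 (Manage Favorites) is internal bookkeeping, not a docker call
    8) docker logs --tail 100 "$c" ;;                  # Show Logs (Static)
    *) echo "unknown option: $choice" >&2; return 1 ;;
  esac
}
```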

    Display Examples

    Simple View:

    +-----+------------------+
    | Nr. | Container Name   |
    +-----+------------------+
    | 1   | web-server       |
    | 2   | database         |
    +-----+------------------+
    
    Quick commands:
      [1-9] | [name]    Select container    |  'd' Toggle advanced view     |  'f' Show favorites
      'h' Show history  |  'q' Quit          |  Partial names supported (e.g., 'web' matches 'web-server')
    

    Advanced View:

    +-----+------------------+-----------------+--------+------------+---+
    | Nr. | Container Name   | IP Address      | Uptime | Ports      | * |
    +-----+------------------+-----------------+--------+------------+---+
    | 1   | web-server       | 172.17.0.2      | 2d     | 80→8080    |   |
    | 2   | database         | 172.17.0.3      | 5h     | N/A        | * |
    +-----+------------------+-----------------+--------+------------+---+
    

    Installation

    Global Installation (Recommended)

    1. Download and install directly:

    curl -JLO https://raw.githubusercontent.com/disisto/docker-container-manager/main/docker-container-manager.sh
    chmod +x docker-container-manager.sh
    sudo mv docker-container-manager.sh /usr/local/bin/dcon
    2. Use from anywhere:

    dcon
    dcon nginx
    dcon web

    Alternative: Shell Alias Method

    1. Place the script in your home directory:
    mv docker-container-manager.sh ~/.docker-container-manager.sh
    2. Add alias to your shell configuration:

    # For bash users
    echo 'alias dcon="$HOME/.docker-container-manager.sh"' >> ~/.bashrc
    source ~/.bashrc
    
    # For zsh users  
    echo 'alias dcon="$HOME/.docker-container-manager.sh"' >> ~/.zshrc
    source ~/.zshrc
    3. Use the command:

    dcon
    dcon web-server

    Installation Example for Debian/Ubuntu

    # Install dependencies
    sudo apt update && sudo apt install curl
    
    # Download and install
    curl -JLO https://raw.githubusercontent.com/disisto/docker-container-manager/main/docker-container-manager.sh
    chmod +x docker-container-manager.sh
    
    # Global installation
    sudo mv docker-container-manager.sh /usr/local/bin/dcon
    
    # Test installation
    dcon --help
    dcon

    Configuration

    The tool automatically creates configuration files in ~/.docker-selector/:

    • favorites – Your favorite containers
    • history – Recent command history
    • config – Tool settings (theme, log lines, etc.)
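
    The favorites and history files lend themselves to simple append-style bookkeeping. A minimal sketch, assuming plain one-entry-per-line files (the function names here are invented, not dcon's source):

```shell
# Minimal sketch of favorites/history bookkeeping (invented helpers).
DCON_DIR="${DCON_DIR:-$HOME/.docker-selector}"

# Add a container to favorites, skipping duplicates.
add_favorite() {
  mkdir -p "$DCON_DIR"
  grep -qxF "$1" "$DCON_DIR/favorites" 2>/dev/null ||
    printf '%s\n' "$1" >> "$DCON_DIR/favorites"
}

# Append a timestamped action to the history file,
# e.g. "2024-01-01 12:00:00 exec web-server".
log_history() {
  mkdir -p "$DCON_DIR"
  printf '%s %s %s\n' "$(date '+%F %T')" "$1" "$2" >> "$DCON_DIR/history"
}
```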

    Advanced Features

    Favorites Management

    • Add containers to favorites with action menu option 7
    • View only favorites with f command
    • Favorites are marked with * in the table

    Command History

    • All actions are automatically logged with timestamps
    • View recent activity with h command
    • Tracks exec, logs, stats, info, ports, and restart actions

    Dynamic Display

    • Table columns automatically resize based on content
    • Container names up to 50 characters fully displayed
    • IP addresses and ports get optimal column width
    • ASCII-compatible borders work in all terminals

    Flexible Matching

    • Exact name matching: dcon web-server
    • Partial matching: dcon web (finds web-server, web-app, etc.)
    • Multiple partial matches show selection menu
    • Case-sensitive matching for precision
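
    These matching rules can be sketched as a small shell function. In the real tool the candidate names would come from docker ps --format '{{.Names}}'; match_containers is an invented name for illustration:

```shell
# Sketch of the matching rules: an exact match wins outright, otherwise
# collect case-sensitive substring matches (several matches would then
# be presented as a selection menu).
match_containers() {
  local pattern=$1; shift
  local name matches=()
  for name in "$@"; do
    if [ "$name" = "$pattern" ]; then
      printf '%s\n' "$name"   # exact match wins outright
      return 0
    fi
    case $name in
      *"$pattern"*) matches+=("$name") ;;
    esac
  done
  if [ "${#matches[@]}" -gt 0 ]; then
    printf '%s\n' "${matches[@]}"
  fi
}
```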

    Requirements

    • Docker – Must be installed and running
    • Bash – Version 4.0+ recommended
    • Terminal – Any standard terminal with ASCII support

    Tips

    • Use d to toggle advanced view for more container information
    • Partial names work great: dcon db instead of dcon production-database-v2
    • Add frequently used containers to favorites for quick access
    • Use h to see what you’ve been working on recently
    • The tool remembers your last view preference (simple/advanced)

    Complete Docker container management in one powerful tool! 🚀

    Visit original content creator repository

  • techiton

    Techiton Game

    A robotic Real-Time Strategy game with mechanical simplicity and annihilation in mind.

    Built with Unity3D and inspired by Total Annihilation (1997).

    Please note: Currently in pre-alpha and early stages of development!

    Overview

    Some basic stuff:

    • The game itself is open source.
    • The website application (https://techiton.net) and game session server however are closed source.
    • Single-player will be free, forever, period.
    • Multiplayer is planned to have a one-time payment of $3-$5 for each account to cover the gameserver costs.
    • Signing in using Steam will create a new account and bind it to your Steam ID.
    • No payment is required to make an account.

    Features planned

    Units:

    Accounts:

    • An account will be optional and free for single-player and would only be used for tracking individual statistics and replays.
    • The account profile will be viewable from https://techiton.net/profile/
      • Profiles will be hidden by default.
      • Profiles will show number of overall games, wins, losses, energy generated, metal generated, units created, units destroyed and time played. This will also be viewable in-game, but outside an active game session.
      • The game and website will dynamically compute and display averages of these values.
      • After each game concludes, the game exits to show the post-game numbers and a button linking back to the game lobby.

    Multiplayer:

    • Steam will be seamlessly integrated with the game.
    • The plan for multiplayer is to have a small one-time fee of $3-5 to help in server costs.
    • This also includes replay storage and user account statistics.
    • (This could also help deter phony account creation.)

    Game lobbies:

    • For the in-game login, imagine along the lines of the Warcraft III login screen but with a robotic theme.
    • Game lobbies and general chat will be displayed after login (visualize Diablo II’s lobby).
    • If an account is not flagged for multiplayer, then it will be rejected and will hyperlink to the game page.
    • Group chats are planned.

    This is basically how I document, design and develop:
    GitKraken is an essential part of my workflow (I pay for a GitKraken Pro subscription),
    and I track tasks with Kanban cards on a GitKraken Glo board.

    Have ideas or want to help? Let me know what you’re interested in working on!
    Email me at nulsoro@gmail.com or message me on Discord (euheimr#0950)!
    Or, join the Techiton Discord channel here and tag me in a comment with @euheimr.


  • HipparchiaServer

    a front end to the database generated by HipparchiaBuilder

    key features:
    	searching
    		search multiple corpora simultaneously
    		build search lists with according to a variety of criteria
    			select passages by hand or via autofill boxes that know the structure of any text at any point
    			search by date range
    			add/exclude individual authors
    			add/exclude individual author genres
    			add/exclude individual works
    			add/exclude individual work genres
    			add/exclude individual passages
    			add/exclude work spans ('books 1-2', e.g.)
    			add/exclude individual author locations
    			add/exclude individual work provenances
    			include/exclude spuria
    			include/exclude undateable works
    			remove items from the list by double-clicking
    			store and load search lists between sessions
    			reset sessions to configurable defaults
    		search syntax
    			search with or without polytonic accents
    				type accented words to make the search sensitive to accents
    				type unaccented words and the search is not sensitive to accents
    			wildcard searching via regular expressions
    			phrase searching: "κατὰ τὸ ψήφιϲμα", etc.
    			proximity searching:
    				within N lines or words
    				not within N lines or words
    			lemmatized searches: look for all known forms of a word
    			lemmatized searches can be combined with non-lemmatized proximity searches
    			phrase searches can be combined with other types: phrase + phrase, phrase + lemmatized or phrase + word
    		results
    			results can be limited to a maximum number of hits
    			results can be limited to one hit per author/work
    			results can be sorted by name, date, etc
    			can set amount of context to accompany results
    	tools
    		browser
    			browse to any passage of your choice
    			browse to any passage that occurs as a search result
    			skim forwards or backwards in the browser
    			click on words to acquire parsing and dictionary info for them
    		dictionaries
    			look up individual words in Greek or Latin
    			customize dictionary output contents
    			get a morphological analysis of a Greek or Latin word
    			get per corpus counts of the use of the word and its derivatives
    			get a weighted chronological distribution of the word's use: mostly 'early', etc.
    			get a weighted distribution by top N genres: show if a word predominantly 'epic', etc.
    			get a summary of uses, senses, phrases, and quotes
    			reverse lookup: 'unexpected' returns ἀδευκήϲ, ἀδόκητοϲ, ἀδόξαϲτοϲ, ἀελπτία, ...
    				by default results return in order of word frequency
    			click to browse to passages cited in the lexical entries ('often' works)
    			click to follow a 'cf.'
    			flip forward/backwards through neighboring entries
    		morphology tables
    			see all extant forms arrayed by dialect, mood, voice, etc.
    			use statistics present next to each form
    				e.g., 2nd sg attic middle future indicatives are...
    				ἀπολέϲηι (4) / ἀπολεῖ (181) / ἀπολέει (2) / ἀπολέϲει (181) / ἀπολέϲῃ (244) / ἀπολῇ (21)
    			click to execute a follow-up search on any item
    			toggles set the amount of detail to display
    		text maker
    			build a text of a whole work or subsection of a work
    			for example see Xenophon, Hellenica as a whole or just book 3 or just book 3, chapter 4
    			customize text output formatting
    		index maker
    			build an index for a whole author, work or subsection of a work
    			for example see an index to all of Vergil or just the Aeneid or just Book 1 of the Aeneid
    			index by word observed or by dictionary headword (if found...)
    			sort index alphabetically or by number of hits
    			click on index words to get lexical information [excessive index size will disable this]
    			click on index passages to browse to that passage [excessive index size will disable this]
    		semantic vectors
    			calculate the relationship between words on any arbitrary search list using linear algebra
    			various algorithms available:
    				literal proximity
    				word2vec nearest neighbors
    				lsi matrix-similarity
    				lda topic maps
    			configurable settings for key variables like training runs and downsampling
    			trim results by part of speech
    
    	local/contextual information
    		searches give progress updates in percentage complete and time elapsed
    		concordance builds give progress updates in percentage complete and time elapsed
    		search lists can be inspected/hidden before execution
    		local info on current author can be shown/hidden
    		local info on genre lists can be shown/hidden
    		show/hide the settings pane
    		show/hide the complex criteria setter
    		show/hide the complex search dialog boxes
    		hover over interface items to get tooltips
    
    	misc
    		restrict access via user/pass combinations
    		accepts betacode input of greek (with or without accents):
    			"MH=NIN A)/EIDE QEA\"
    			"mh=nin a)/eide qea\"
    			"mhnin aeide qea"
    		search will attempt to choose the most efficient strategy for any given situation
    		text layout in results/browser/text maker sensitive to shifts in font face and size
    		text layout via CSS: possible to modify the style sheet to suit your own tastes
    		optional highlighting of editorial insertions: {abc}, <def>, (ghi), [jkl]
    		configurable defaults for most options
    		configurable UI elements: hide features you will never use
    		will display annotations to the original text
    		unicode support of technical, rare, and exotic characters (that you can also search for: 𐆂,𐄒, 🜚)
    		can find Coptic words and characters: 'ⲫⲓⲗⲟⲑⲉⲟⲥ', 'ϩανϭοⲥˈ', etc.
    		forward-compatible unicode: attempt to properly code characters which are not yet available in most fonts
    		known unknowns: unhandled characters preserve their betacode messages in the metadata for future fixes
    		debugging options can be enabled at launch time (see "./run.py -h")
            (optional) threading via helper extension
            (optional) websockets via helper extension
            (optional) semantic vectors via helper extension
    
    

    HipparchiaServer typically runs from the command line within a python virtual environment

    for example:

    % ~/hipparchia_venv/bin/python3 ~/hipparchia_venv/HipparchiaServer/run.py
    

    or, more tersely:

    % run.py
    

    Upon startup you will see something like:

    [launch image]

    Note that keyedlemmata can take a while to load. You are ready for business when you see the last line that says Running...:

    * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
    

    Then you aim your browser at http://localhost:5000 and you are ready to roll.

    Alternatively, you can hook HipparchiaServer to something like nginx via uwsgi. That would give you a different URL.

    By default HipparchiaServer will not accept connections that do not originate from the host machine. It would be rather unwise to expose this server to the whole internet. There are many elements to this unwisdom.

    Let us only mention one of them: there are security checks inside Hipparchia, but many queries can be generated that would consume vast computational resources. What would happen if 1000 people tried to do that to your machine at once?

    Of course, most queries take <2s to execute. But servers live in the worst of all possible worlds.

    Instructions on how to use Hipparchia can be found by clicking on the ‘?’ button if you can make it to the front page.

    
    minimum software requirements:
    
    	to launch HipparchiaServer
    		python 3.6+
    			flask
    			psycopg2 or psycopg2-binary
    			websockets
    		postgresql 9.6+
    
    	to run the vectorizing functions
    		python 3.6+
    			cython
    			scipy
    			numpy
    			gensim
    			sklearn
    			pyLDAvis
    			matplotlib
    			networkx
    			umap-learn
    
    	to properly interact with HipparchiaServer via a browser
    		jquery
    		jquery-ui
    		js-cookie
    		a fully stocked unicode font [Arial, DejaVu, Noto, Roboto, ...]
    
    	HipparchiaThirdPartySoftware can provide jquery, etc.
    	HipparchiaExtraFonts can provide Noto, etc.
    	
    	javascript must be enabled in the browser
    	the browser must accept cookies if you wish to save searches
    	
    
    

    See Hipparchia[Platform] for autoinstallers and/or installation recipes for your operating system. Hipparchia can be installed on BSD, Linux, macOS, and Windows. The required fonts and JS libraries are available via HipparchiaThirdPartySoftware (https://github.com/e-gun/HipparchiaThirdPartySoftware)

    Nevertheless, here are the project pages for the other dependencies:

    jquery: http://jquery.com/download/

    jquery-ui: http://jqueryui.com/download/

    js-cookie: https://github.com/js-cookie/js-cookie/releases

    Semantic vectors are not installed by default. If you install them, you will also need to edit ./server/settings/semanticvectorsettings.py to enable them: SEMANTICVECTORSENABLED = 'yes'

    Note also that each type of vector search must be individually enabled in the configuration file. The default installation has them all set to no, so you will need to edit at least one of them and set it to yes.
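
    If you prefer to flip these flags from the shell, something like the following works (a sketch; enable_option is an invented helper, and you should confirm the exact option names inside semanticvectorsettings.py before relying on it):

```shell
# Flip an "OPTION = 'no'" line to "OPTION = 'yes'" in a settings file.
# Keeps a .bak backup; the -i.bak form works with both GNU and BSD sed.
enable_option() {
  sed -i.bak "s/^\($1 *= *\)'no'/\1'yes'/" "$2"
}

# e.g.:
#   enable_option SEMANTICVECTORSENABLED ./server/settings/semanticvectorsettings.py
```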

    =====

    What you will see when you point a browser at a HipparchiaServer:

    1. help screen
    2. interface overview
    3. inclusions and exclusions
    4. early epic anger, less the Iliad
    5. by location and date
    6. browser
    7. simple text
    8. click for dictionary lookup
    9. index to author
    10. index by headword
    11. local author info
    12. search list preview
    13. proximity searching (a)
    14. proximity searching (b)
    15. proximity searching (c)
    16. phrase searching
    17. search by special character
    18. runs of two-letter words
    19. one hit, zero context
    20. lemmatized search
    21. conceptual neighborhood in an author
    22. conceptual neighborhood in a corpus / multiple authors
    23. topic models via latent dirichlet allocation
    24. adjust vector settings on the fly
    25. explore morphology
    26. analogy finder

    CLI options:

    usage: run.py [-h] [--dbhost DBHOST] [--dbname DBNAME] [--dbport DBPORT] [--debugmessages] [--enabledebugui] [--portoverride PORTOVERRIDE]
                  [--profiling] [--skiplemma] [--disablevectorbot] [--forceuniversalbetacode] [--forcefont FORCEFONT]
                  [--pooledconnection | --simpleconnection] [--threadcount THREADCOUNT] [--purepython] [--forcehelper] [--modulehelper] [--novectors]
                  [--calculatewordweights] [--collapsedgenreweights]
    
    script used to launch HipparchiaServer
    
    optional arguments:
      -h, --help            show this help message and exit
      --dbhost DBHOST       [debugging] override the config file database host address
      --dbname DBNAME       [debugging] override the config file database name
      --dbport DBPORT       [debugging] override the config file database listening port
      --debugmessages       [debugging] show debugging warnings in the console even if CONSOLEWARNINGTYPES is not configured for it
      --enabledebugui       [debugging] forcibly enable the web debug UI
      --portoverride PORTOVERRIDE
                            [debugging] override the config file listening port
      --profiling           [debugging] enable the profiler
      --skiplemma           [debugging] use empty lemmatadict for fast startup (some functions will be lost)
      --disablevectorbot    [force setting] disable the vectorbot for this run
      --forceuniversalbetacode
                            [force setting] all input on the search line will be parsed as betacode
      --forcefont FORCEFONT
                            [force setting] assign a value to DEFAULTLOCALFONT; "MyFont Sans" requires quotation marks to handle the space in the name
      --pooledconnection    [force setting] force a pooled DB connection
      --simpleconnection    [force setting] force a simple DB connection
      --threadcount THREADCOUNT
                            [force setting] override the config file threadcount
      --purepython          [force setting] disallow use of an external go/rust helper; only use internal local python code
      --forcehelper         [force setting] demand use external go/rust helper; avoid use of internal local python code
      --helpername HELPERNAME
                            [force setting] provide the name of a cli binary
    
      --modulehelper        [force setting] call the use external helper as a module instead of a cli binary
      --novectors           [force setting] disable the semantic vector code
      --calculatewordweights
                            [info] generate word weight info
      --collapsedgenreweights
                            [info] generate word weight info & merge related genres ("allret", etc.)
    
  • data-factory-deploy-action

    Azure Data Factory Deploy Action

    GitHub Action that performs a side-effect free deployment of Azure Data Factory entities in a Data Factory instance.

    How it works

    The GitHub Action uses pre and post-deployment scripts to prevent the deployment from potential side effects, such as:

    • Execution of active triggers during the deployment process, which could corrupt resource relationships or leave pipelines in undesired states.
    • Availability of unused resources that could confuse data engineers and reduce maintainability.

    Architecture Design

    It is designed to run the following steps sequentially:

    1. A pre-deployment task checks for all active triggers and stops them.
    2. An ARM template deployment task is executed.
    3. A post-deployment task deletes all resources that have been removed from the ARM template (triggers, pipelines, dataflows, datasets, linked services, Integration Runtimes) and restarts the active triggers.
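
    The trigger handling in step 1 can be illustrated with the Azure CLI (the action itself uses PowerShell scripts; this is only a sketch of the idea, and the JMESPath query is an assumption about the shape of the trigger list output):

```shell
# Sketch: stop all currently-started triggers before deploying.
stop_active_triggers() {
  local rg=$1 factory=$2 name
  # List triggers whose runtime state is 'Started' (query shape assumed).
  az datafactory trigger list \
      --resource-group "$rg" --factory-name "$factory" \
      --query "[?properties.runtimeState=='Started'].name" -o tsv |
  while read -r name; do
    az datafactory trigger stop \
        --resource-group "$rg" --factory-name "$factory" --name "$name"
  done
}
```

    The post-deployment step would do the reverse: restart the triggers that were active before the deployment.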

    When to use

    The action is useful on Continuous Deployment (CD) scenarios, where a step can be added in a workflow to deploy the Data Factory resources.

    Getting Started

    Prerequisites

    If your GitHub Actions workflows are running on a self-hosted runner, ensure you have installed:

    Example Usage

    steps:
      - name: Login via Az module
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
          enable-AzPSSession: true 
    
      - name: Deploy resources
        uses: Azure/data-factory-deploy-action@v1.2.0
        with:
          resourceGroupName: myResourceGroup
          dataFactoryName: myDataFactory
          armTemplateFile: myArmTemplate.json
          # armTemplateParametersFile: myArmTemplateParameters.json [optional]
          # additionalParameters: 'key1=value key2=value keyN=value' [optional]
          # skipAzModuleInstallation: true [optional]

    Inputs

    Name Description Required Default value
    resourceGroupName Data Factory resource group name true
    dataFactoryName Data Factory name true
    armTemplateFile Data Factory ARM template file false ARMTemplateForFactory.json
    armTemplateParametersFile Data Factory ARM template parameters file false ARMTemplateParametersForFactory.json
    additionalParameters Data Factory custom parameters. Key-value pairs must be separated by spaces. false
    skipAzModuleInstallation Skip Az PowerShell module installation. false false

    Contributing

    This project welcomes contributions and suggestions. Most contributions require you to agree to a
    Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
    the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

    When you submit a pull request, a CLA bot will automatically determine whether you need to provide
    a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
    provided by the bot. You will only need to do this once across all repos using our CLA.

    This project has adopted the Microsoft Open Source Code of Conduct.
    For more information see the Code of Conduct FAQ or
    contact opencode@microsoft.com with any additional questions or comments.

    Trademarks

    This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
    trademarks or logos is subject to and must follow
    Microsoft’s Trademark & Brand Guidelines.
    Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
    Any use of third-party trademarks or logos are subject to those third-party’s policies.


  • magento2-docker

    Magento 2

    Magento 2 Docker to Development (Apple Silicon)

    Traefik + Nginx + Redis + PHP-FPM + MySQL + XDebug + Mailpit + RabbitMQ + OpenSearch + Varnish

    The docker stack is composed of the following containers

    Name Version
    traefik 3.2
    nginx 1.26
    php-fpm 8.3
    php-fpm-xdebug 3.2.2
    redis 7.2
    mysql 8.0.41
    mailpit 1.24
    rabbitmq 3.13
    opensearch 2.12.0
    opensearch-dashboard 2.12.0
    varnish 7.6

    Container traefik

    Starts a reverse proxy and load balancer for the project
    Opens local ports: 80, 443

    Container nginx

    Builds from the nginx folder.
    Mounts the folder magento2 from the project main folder into the container volume /home/magento.

    Container php-fpm

    Builds from the php-fpm folder.
    Mounts the folder magento2 from the project main folder into the container volume /home/magento.
    This container includes all dependencies for Magento 2.

    Container php-fpm-xdebug

    Builds from the php-fpm-xdebug folder.
    Mounts the folder magento2 from the project main folder into the container volume /home/magento.
    This container includes all dependencies for Magento 2 (also contain xDebug).

    Container redis:

    Starts a redis container.

    Container mysql:

    Please change or set the mysql environment variables

    MYSQL_DATABASE: 'xxxx'
    MYSQL_ROOT_PASSWORD: 'xxxx'
    MYSQL_USER: 'xxxx'
    MYSQL_PASSWORD: 'xxxx'
    MYSQL_ALLOW_EMPTY_PASSWORD: 'xxxxx'
    

    Default values:

    MYSQL_DATABASE: 'magento_db'
    MYSQL_ROOT_PASSWORD: 'root_pass'
    MYSQL_USER: 'magento_user'
    MYSQL_PASSWORD: 'PASSWD#'
    MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
    

    Opens up port: 3306

    Note: On your host, port 3306 might already be in use. Before running docker-compose, change the host port number in the docker-compose.yml mysql section to something other than 3306; any port works as long as it is not already in use locally on your machine.

    Container mailpit:

    Starts a mailpit container.
    Opens up port: 8025

    Container rabbitmq:

    Starts a rabbitmq container.
    Opens up port: 15672

    Container opensearch:

    Starts an opensearch container.

    Container opensearch-dashboard:

    Starts an opensearch dashboard container.
    Opens up port: 5601

    Container varnish:

    Builds from the varnish folder.
    Starts a varnish container.
    Opens up port: 6081

    Setup

    Copy .env.sample to .env in the root folder, and change PROJECT_NAME and PROJECT_VIRTUAL_HOST:
    PROJECT_NAME – used to build simple and clear container names.
    PROJECT_VIRTUAL_HOST – your main URL.

    For example:

    PROJECT_NAME=magento2
    PROJECT_VIRTUAL_HOST=magento2.test
    

    Edit your /etc/hosts and add the following line:
    127.0.0.1 magento2.test traefik.magento2.test mail.magento2.test search.magento2.test dashboard.magento2.test rabbit.magento2.test

    To start/build the stack, use docker-compose up, or docker-compose up -d to run the containers in detached mode.
    Compose will take some time to execute.
    After the build has finished, you can press ctrl+c and docker-compose will stop all containers.

    Installing Magento

    You can check the latest version of Magento at: https://magento.com/tech-resources/download
    To run the installation process, use the following commands.
    Create a new project:

    ./scripts/composer create-project --repository-url=https://repo.magento.com/ magento/project-community-edition=2.4.7-p4 /home/magento
    

    Install the project (don’t forget to change --base-url to your own):

    ./scripts/magento setup:install --base-url=https://magento2.test/ --db-host=mysql --db-name=magento_db --db-user=magento_user --db-password="PASSWD#" --admin-firstname=admin --admin-lastname=admin --admin-email=admin@admin.test --admin-user=admin --admin-password=admin1! --language=en_US --currency=USD --timezone=America/Chicago --use-rewrites=1 --opensearch-host=opensearch --opensearch-port=9200 --search-engine=opensearch
    

    Setting up Magento

    To access the magento homepage, go to the following url: https://magento2.test

    Storing sessions and cache in Redis.
    As a reference, you can use env.php.magento.sample

    Setting up the configuration for sessions.

       'session' => [
            'save' => 'redis',
            'redis' => [
                'host' => 'redis',
                'port' => '6379',
                'password' => '',
                'timeout' => '2.5',
                'persistent_identifier' => '',
                'database' => '2',
                'compression_threshold' => '2048',
                'compression_library' => 'gzip',
                'log_level' => '1',
                'max_concurrency' => '6',
                'break_after_frontend' => '5',
                'break_after_adminhtml' => '30',
                'first_lifetime' => '600',
                'bot_first_lifetime' => '60',
                'bot_lifetime' => '7200',
                'disable_locking' => '0',
                'min_lifetime' => '60',
                'max_lifetime' => '2592000'
            ]
        ]

    Setting up the configuration for cache.

    'cache' => [
            'frontend' => [
                'default' => [
                    'id_prefix' => '777_',
                    'backend' => 'Cm_Cache_Backend_Redis',
                    'backend_options' => [
                        'server' => 'redis',
                        'database' => '0',
                        'port' => '6379',
                        'compress_data' => '1',
                        'compress_tags' => '1'
                    ]
                ],
                'page_cache' => [
                    'id_prefix' => '777_',
                    'backend' => 'Cm_Cache_Backend_Redis',
                    'backend_options' => [
                        'server' => 'redis',
                        'port' => '6379',
                        'database' => '1',
                        'compress_data' => '0'
                    ]
                ]
            ],
            'allow_parallel_generation' => false
        ],

    Don’t forget to add http_cache_hosts so that the Varnish purge works correctly.

    'http_cache_hosts' => [
            [
                'host' => 'nginx',
                'port' => '8080'
            ]
        ]
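For orientation, all three of these blocks live in the deployment configuration file app/etc/env.php (the env.php.magento.sample file mentioned above shows the full layout). A minimal sketch of how they fit together, with unrelated keys elided:

```php
<?php
// app/etc/env.php – sketch showing only the Redis- and Varnish-related keys;
// the real file contains many other deployment settings.
return [
    // ... other deployment configuration ...
    'session' => [
        'save' => 'redis',
        'redis' => [ /* Redis session settings as shown above */ ],
    ],
    'cache' => [
        'frontend' => [ /* 'default' and 'page_cache' backends as shown above */ ],
        'allow_parallel_generation' => false,
    ],
    'http_cache_hosts' => [
        ['host' => 'nginx', 'port' => '8080'],
    ],
];
```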

    How to use xDebug

    You can enable or disable xDebug with the following command: ./scripts/switch_mode [fpm|xdebug]
    fpm – run the container without xDebug
    xdebug – run the container with xDebug

    Also, you can open:
    • https://traefik.magento2.test – Traefik Dashboard (traefik/traefik123 for access)
    • https://mail.magento2.test – Mailpit
    • https://search.magento2.test – OpenSearch
    • https://dashboard.magento2.test – OpenSearch Dashboard
    • https://rabbit.magento2.test – RabbitMQ (guest/guest for access)

    Feature Updates

    • v1.0.0 – Stable release
    • v1.0.1 – Update to PHP 7.4.x, add docker-sync for macOS users
    • v1.0.2 – Fix xDebug, add RabbitMQ management, fix email sending
    • v1.0.3 – Update to PHP 8.1.x
    • v1.0.4 – Fix xDebug for stable operation
    • v1.0.5 – Replace Elasticsearch with OpenSearch, upgrade component versions, add Varnish
    • v1.0.6 – Fix xDebug so it stops correctly at breakpoints
    • v1.0.7 – Add Traefik, optimize Varnish, remove nginx-proxy
    • v1.0.8 – Replace MailHog with Mailpit
    • v1.0.9 – Add n98-magerun2
    • v1.1.0 – Add a switcher for PHP that enables or disables xDebug
    • v1.1.1 – Fix the proxying cycle between Varnish and nginx
    • v1.1.2 – Update image versions for compatibility

    Branches

    Name Magento versions
    master 2.4.7 and higher
    m246 2.4.6 up to 2.4.7
    m244 2.4.4 up to 2.4.6
    develop like master with untested improvements


    Visit original content creator repository

  • terraform-aws-security-alerts

    terraform-aws-security-alerts

    This module helps implement compliance with the CIS benchmarks, as tested by the Bridgecrew checks (https://docs.bridgecrew.io/docs/monitoring-policies):

    • BC_AWS_MONITORING_1
    • BC_AWS_MONITORING_2
    • BC_AWS_MONITORING_3
    • BC_AWS_MONITORING_4
    • BC_AWS_MONITORING_5
    • BC_AWS_MONITORING_6
    • BC_AWS_MONITORING_7
    • BC_AWS_MONITORING_8
    • BC_AWS_MONITORING_9
    • BC_AWS_MONITORING_10
    • BC_AWS_MONITORING_11
    • BC_AWS_MONITORING_12
    • BC_AWS_MONITORING_13
    • BC_AWS_MONITORING_14

    This module is 100% Open Source and licensed under Apache 2.0.

    Introduction

    This module deploys security alerts for an AWS account. TODO: update to use a Lambda rather than SNS email – https://aws.amazon.com/premiumsupport/knowledge-center/change-sns-email-for-cloudwatch-events/

    Usage

    Include this repository as a module in your existing Terraform code:

    module "security-alerts" {
      source            = "JamesWoolfenden/security-alerts/aws"
      version           = "v0.0.3"
      endpoint          = var.endpoint
    }

    Testing

    aws cloudwatch set-alarm-state --alarm-name "vpc_changes_alarm" --state-reason "Testing the Amazon Cloudwatch alarm" --state-value ALARM --region eu-west-2
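To exercise every alarm in one pass, the same CLI call can be wrapped in a loop. The sketch below is a dry run that only prints each command; the alarm names are illustrative, so substitute the ones this module created in your account, and drop the leading echo to actually fire them:

```shell
# Dry run: print one set-alarm-state call per alarm.
# Remove the "echo" to invoke the AWS CLI for real.
for alarm in vpc_changes_alarm unauthorised_api_calls_alarm; do
  echo aws cloudwatch set-alarm-state \
    --alarm-name "$alarm" \
    --state-reason "Testing the Amazon Cloudwatch alarm" \
    --state-value ALARM \
    --region eu-west-2
done
```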

    Costs

    Calculated monthly cost estimate
    
    Project: .
    
     Name                                                      Monthly Qty  Unit           Monthly Cost
    
     module.alerts.aws_cloudwatch_metric_alarm.bucket_mod
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.cloudtrail_cfg
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.cmk
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.config_change
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.gateway
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.nacl
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.nomfa
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.policychange
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.root
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.routes
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.sg
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.signfail
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.unauth
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_cloudwatch_metric_alarm.vpc
     └─ Standard resolution                                              1  alarm metrics         $0.10
    
     module.alerts.aws_sns_topic.trail-unauthorised
     └─ Requests                                                         0  1M requests           $0.00
    
     PROJECT TOTAL                                                                                $1.40
    
    ----------------------------------
    1 resource type wasn't estimated as it's not supported yet.
    1 x aws_sns_topic_subscription
    

    Requirements

    No requirements.

    Providers

    Name Version
    archive n/a
    aws n/a

    Modules

    No modules.

    Resources

    Name Type
    aws_cloudwatch_log_group.processor resource
    aws_cloudwatch_log_metric_filter.bucket_mod resource
    aws_cloudwatch_log_metric_filter.cloudtrail_cfg resource
    aws_cloudwatch_log_metric_filter.cmk resource
    aws_cloudwatch_log_metric_filter.config_change resource
    aws_cloudwatch_log_metric_filter.gateway resource
    aws_cloudwatch_log_metric_filter.nacl resource
    aws_cloudwatch_log_metric_filter.nomfa resource
    aws_cloudwatch_log_metric_filter.policychange resource
    aws_cloudwatch_log_metric_filter.root resource
    aws_cloudwatch_log_metric_filter.routes resource
    aws_cloudwatch_log_metric_filter.sg resource
    aws_cloudwatch_log_metric_filter.signfail resource
    aws_cloudwatch_log_metric_filter.unauth resource
    aws_cloudwatch_log_metric_filter.vpc resource
    aws_cloudwatch_metric_alarm.bucket_mod resource
    aws_cloudwatch_metric_alarm.cloudtrail_cfg resource
    aws_cloudwatch_metric_alarm.cmk resource
    aws_cloudwatch_metric_alarm.config_change resource
    aws_cloudwatch_metric_alarm.gateway resource
    aws_cloudwatch_metric_alarm.nacl resource
    aws_cloudwatch_metric_alarm.nomfa resource
    aws_cloudwatch_metric_alarm.policychange resource
    aws_cloudwatch_metric_alarm.root resource
    aws_cloudwatch_metric_alarm.routes resource
    aws_cloudwatch_metric_alarm.sg resource
    aws_cloudwatch_metric_alarm.signfail resource
    aws_cloudwatch_metric_alarm.unauth resource
    aws_cloudwatch_metric_alarm.vpc resource
    aws_iam_role.SNSFailureFeedback resource
    aws_iam_role.SNSSuccessFeedback resource
    aws_iam_role.lambda-messageprocessor resource
    aws_iam_role_policy.failure resource
    aws_iam_role_policy.lambda resource
    aws_iam_role_policy.success resource
    aws_kms_alias.alarm resource
    aws_kms_key.alarm resource
    aws_lambda_function.email resource
    aws_lambda_permission.with_sns resource
    aws_sns_topic.processed-message resource
    aws_sns_topic.trail-unauthorised resource
    aws_sns_topic_subscription.Emailfromlambda resource
    aws_sns_topic_subscription.lambda resource
    archive_file.notify data source
    aws_caller_identity.current data source

    Inputs

    Name Description Type Default Required
    concurrency n/a number 1 no
    endpoint n/a string n/a yes
    function_name n/a string "message-processor" no
    kms-alias n/a string "alias/alarms" no
    kms_key n/a string "alias/aws/sns" no
    log_group_name n/a string "cloudtrail" no
    protocol n/a string "sms" no

    Outputs

    Name Description
    alarms The alarms created
    lambda The lambda
    metrics The metrics filters for the Alarms
    sns_topic_processed The final SNS endpoint for a processed message
    sns_topic_subscription_lambda The SNS subscription that pulls messages into being processed by the Lambda

    Policy

    This is the policy required to build this project:

    The Terraform resource required is:

    resource "aws_iam_policy" "terraform_pike" {
      name_prefix = "terraform_pike"
      path        = "https://github.com/"
      description = "Pike Autogenerated policy from IAC"
    
      policy = jsonencode({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "SNS:CreateTopic",
                    "SNS:DeleteTopic",
                    "SNS:GetTopicAttributes",
                    "SNS:ListTagsForResource",
                    "SNS:SetTopicAttributes"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                    "cloudwatch:DeleteAlarms",
                    "cloudwatch:DescribeAlarms",
                    "cloudwatch:ListTagsForResource",
                    "cloudwatch:PutMetricAlarm"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor2",
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeAccountAttributes"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor3",
                "Effect": "Allow",
                "Action": [
                    "iam:CreateRole",
                    "iam:DeleteRole",
                    "iam:DeleteRolePolicy",
                    "iam:GetRole",
                    "iam:GetRolePolicy",
                    "iam:ListAttachedRolePolicies",
                    "iam:ListInstanceProfilesForRole",
                    "iam:ListRolePolicies",
                    "iam:PassRole",
                    "iam:PutRolePolicy"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor4",
                "Effect": "Allow",
                "Action": [
                    "kms:CreateAlias",
                    "kms:CreateKey",
                    "kms:DeleteAlias",
                    "kms:DescribeKey",
                    "kms:EnableKeyRotation",
                    "kms:GetKeyPolicy",
                    "kms:GetKeyRotationStatus",
                    "kms:ListAliases",
                    "kms:ListResourceTags",
                    "kms:PutKeyPolicy",
                    "kms:ScheduleKeyDeletion"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor5",
                "Effect": "Allow",
                "Action": [
                    "lambda:AddPermission",
                    "lambda:CreateFunction",
                    "lambda:DeleteFunction",
                    "lambda:GetFunction",
                    "lambda:GetFunctionCodeSigningConfig",
                    "lambda:GetPolicy",
                    "lambda:ListVersionsByFunction",
                    "lambda:RemovePermission"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor6",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:DeleteLogGroup",
                    "logs:DeleteMetricFilter",
                    "logs:DeleteRetentionPolicy",
                    "logs:DescribeLogGroups",
                    "logs:DescribeMetricFilters",
                    "logs:ListTagsLogGroup",
                    "logs:PutMetricFilter",
                    "logs:PutRetentionPolicy"
                ],
                "Resource": "*"
            }
        ]
    })
    }
    

    Related Projects

    Check out these related projects.

    Help

    Got a question?

    File a GitHub issue.

    Contributing

    Bug Reports & Feature Requests

    Please use the issue tracker to report any bugs or file feature requests.

    Copyrights

    Copyright © 2021-2022 James Woolfenden

    License

    See LICENSE for full details.

    Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

    Contributors

    James Woolfenden

    Visit original content creator repository
  • FluidCollections

    FluidCollections

    All programs manage changing data, but most languages and libraries don’t have a consistent way to model the relationships among differing collections of this data. Coordinating updates across an entire program can be a nightmare with standard collections, and the logic is often clouded by loops, locking, synchronization logic, and constant state-checking. And even if one manages to program all of this correctly, the data still must be reorganized into an ObservableCollection or equivalent to display on the UI. Managing data like this is possible in a simple, single-threaded program, but add some multi-threaded, parallel, and asynchronous logic and it quickly becomes an unworkable mess. If any of this sounds familiar (and even if it doesn’t), Fluid Collections are for you!

    In short, Fluid Collections allow you to specify the relationships between your data, and the library makes it happen auto-magically. This is the same idea as the Reactive Extensions for .NET, but applied to collections instead of sequences. If you don’t know much about reactive extensions, don’t worry! It’s definitely possible to use this library without knowing too much about Rx, but a working knowledge certainly helps. This library defines certain reactive collections that are extremely simple, easy to use, and just seem to flow together (and hence the name).

    The ReactiveSet

    The ReactiveSet<T> is the basic unit of FluidCollections, and right now the only reactive collection (a ReactiveDictionary<T1,T2> is in the works). Here is a demo of a basic use case:

    ReactiveSet<int> set1 = new ReactiveSet<int>();
    ReactiveSet<int> set2 = new ReactiveSet<int>();
    
    IReactiveSet<int> union = set1.Union(set2).Where(x => x > 3);
    
    set1.Add(2);
    set1.Add(3);
    
    set2.Add(3);
    set2.Add(4);
    
    Console.WriteLine(union.Contains(2)); // False
    Console.WriteLine(union.Contains(4)); // True

    To start with, we define two reactive sets. These function with the same rules as normal sets (no duplicates, no order), but with a reactive twist. When the union set is defined, some LINQ-like operations are used to compose the two original sets, and this composition yields a new ReactiveSet. Here’s the reactive part: no matter what happens to the original sets, the new union set will always reflect the result of the union operation, followed by the filter; no syncing required! Notice that the union set updates after it is defined, in stark contrast to normal collections.

    Operations

    FluidCollections was designed with LINQ and Rx in mind, so most of these operations should feel very familiar. Each operation produces a new reactive set that automatically updates itself with the changes that occur in the parent set(s). I haven’t listed all the operations below, so I encourage you to explore the library yourself!

    Set operations

    Reactive sets are… well, sets. So naturally, the standard set operations are supported:

    set1.Union(set2);
    set1.Intersection(set2);
    set1.Except(set2);
    set1.SymmetricExcept(set2);

    Select/Where

    Just like LINQ and Rx, you can transform and filter reactive sets:

    set1.Where(x => x <= 9);
    set1.Select(x => x + 75);

    Aggregate and friends

    Aggregate on reactive sets is similar to that of LINQ, but with one major difference: the aggregate for reactive sets is a running total, not an instantaneous total. As such, it returns an IObservable of the result. Aggregate essentially converts the changes from a reactive set into a running total by applying the supplied add function or remove function, depending on the type of update. After updating the total, the new result is pushed to the IObservable.

    // ReactiveSet<T>.Aggregate(seed, addFunction, removeFunction)
    IObservable<int> sum = set1.Aggregate(0, (total, item) => total + item, (total, item) => total - item);
    
    // Implemented with aggregate
    IObservable<int> realSum = set1.Sum();
    IObservable<int> product = set1.Product();
    IObservable<int> count = set1.Count();
    IObservable<IImmutableSet<int>> sets = set1.ToImmutableSets();

    ToReactiveSet

    After a chain of other non-buffering operations, use ToReactiveSet() to buffer the result into an internal collection that can be traversed. (See variants for more info)

    // Set3 is non-buffering. No count property. Also can't enumerate the elements
    IReactiveSet<int> set3 = set1.Intersection(set2).Where(x => x > 4);
    
    // This stores all the resulting elements in an internal set, and so is a bit more concrete
    ICollectedReactiveSet<int> bufferedSet3 = set3.ToReactiveSet();
    
    // Instantaneous count property
    Console.WriteLine(bufferedSet3.Count);
    
    // Can traverse the elements currently in the set
    // Since the set is reactive, both the elements and the count can unpredictably change
    foreach (int num in bufferedSet3.AsEnumerable()) {
        Console.WriteLine(num);
    }

    OrderBy

    OrderBy buffers a reactive set into an ordered reactive set (see below), using either the default comparer or a provided property to sort on.

    IOrderedReactiveSet<int> set4 = set1.OrderBy(x => x /* Normally a useful property */);
    
    Console.WriteLine(set4.Min);
    Console.WriteLine(set4.Max);
    
    // Indexing works!
    Console.WriteLine(set4.IndexOf(4));
    Console.WriteLine(set4[2]);

    ReactiveSet Variants

    • ReactiveSet<T> – The most basic reactive set; it serves as the “source” for more dependent reactive sets. It is mutable and supports additions and removals.

    • IReactiveSet<T> – This represents a non-buffered reactive set, meaning that the elements are not stored in an underlying collection. The benefit is that it uses almost no additional memory, making it very efficient. Think of it as the IEnumerable of FluidCollections. Most extension methods return an IReactiveSet, and it is not directly modifiable. Despite not buffering elements, there is a Contains method.

    • ICollectedReactiveSet<T> – Think of this as the ICollection of FluidCollections. This is the same as a normal reactive set, but there is a count property as well as an AsEnumerable() method that lets you traverse the elements directly. It is buffered and stores a copy of the elements in its own internal set. This is usually the end result of a chain of operations performed on another reactive set so you can actually use the set with other code.

    • IOrderedReactiveSet<T> – This type of reactive set is both sorted and indexed, and provides appropriate members accordingly. All operations are performed in O(log n) time, including indexing. Because of this, an ordered reactive set implements INotifyCollectionChanged, making it perfect for UI binding! (for those interested, this is implemented with a custom weight-balanced order statistic tree).

    Contributing

    This project is just in its infancy, and I’m not attached to particular API’s, classes, or methods. Right now, I simply want to make the library as good as it can get, and backwards compatibility can come later. So I encourage you, contribute! There is so much that can be done with this library that I won’t have time to implement, and I’d like to hear any suggestions for improvements you have. Going forward, I’d like to build this project into a one-stop-shop for reactive collections, while at the same time keeping it:

    • Easy to use. Knowing linq should be enough to “dot in” and figure out how to use nearly all of the operations.
    • Performant, but not at the expense of safety. Speed is always a bonus, but I also want the operations to have a certain plug-and-play feel that comes with linq.
    • Intuitive. The name of the project is “fluid” after all, and the updates should Just Happenâ„ĸ without knowledge of the inner workings.

    I’ve tried to achieve these goals with the current version, and if anybody out there wants to help build this library, send over a pull request! (I won’t bite)

    Acknowledgments

    Credit where credit is due, and I’m not afraid to admit that I never would have created this project were it not for DynamicData. DynamicData really is a great piece of engineering. When I first discovered it, I was hooked and tried to convert some of my personal projects to observable lists and caches. However, upon doing so I realized that indexed lists were not always ideal for my situations, and caches sometimes were cumbersome. I began brainstorming, and soon realized that unordered sets were ideal for update notifications, and that reordering can be accomplished with trees. Implementing my set idea became a challenge, and this project was the result. So to DynamicData I say thank you for making me see collections in an entirely different light.

    Visit original content creator repository

  • princess

                      GNU LESSER GENERAL PUBLIC LICENSE
                           Version 2.1, February 1999
    
     Copyright (C) 1991, 1999 Free Software Foundation, Inc.
     51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    [This is the first released version of the Lesser GPL.  It also counts
     as the successor of the GNU Library Public License, version 2, hence
     the version number 2.1.]
    
                                Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    Licenses are intended to guarantee your freedom to share and change
    free software--to make sure the software is free for all its users.
    
      This license, the Lesser General Public License, applies to some
    specially designated software packages--typically libraries--of the
    Free Software Foundation and other authors who decide to use it.  You
    can use it too, but we suggest you first think carefully about whether
    this license or the ordinary General Public License is the better
    strategy to use in any particular case, based on the explanations below.
    
      When we speak of free software, we are referring to freedom of use,
    not price.  Our General Public Licenses are designed to make sure that
    you have the freedom to distribute copies of free software (and charge
    for this service if you wish); that you receive source code or can get
    it if you want it; that you can change the software and use pieces of
    it in new free programs; and that you are informed that you can do
    these things.
    
      To protect your rights, we need to make restrictions that forbid
    distributors to deny you these rights or to ask you to surrender these
    rights.  These restrictions translate to certain responsibilities for
    you if you distribute copies of the library or if you modify it.
    
      For example, if you distribute copies of the library, whether gratis
    or for a fee, you must give the recipients all the rights that we gave
    you.  You must make sure that they, too, receive or can get the source
    code.  If you link other code with the library, you must provide
    complete object files to the recipients, so that they can relink them
    with the library after making changes to the library and recompiling
    it.  And you must show them these terms so they know their rights.
    
      We protect your rights with a two-step method: (1) we copyright the
    library, and (2) we offer you this license, which gives you legal
    permission to copy, distribute and/or modify the library.
    
      To protect each distributor, we want to make it very clear that
    there is no warranty for the free library.  Also, if the library is
    modified by someone else and passed on, the recipients should know
    that what they have is not the original version, so that the original
    author's reputation will not be affected by problems that might be
    introduced by others.
    
      Finally, software patents pose a constant threat to the existence of
    any free program.  We wish to make sure that a company cannot
    effectively restrict the users of a free program by obtaining a
    restrictive license from a patent holder.  Therefore, we insist that
    any patent license obtained for a version of the library must be
    consistent with the full freedom of use specified in this license.
    
      Most GNU software, including some libraries, is covered by the
    ordinary GNU General Public License.  This license, the GNU Lesser
    General Public License, applies to certain designated libraries, and
    is quite different from the ordinary General Public License.  We use
    this license for certain libraries in order to permit linking those
    libraries into non-free programs.
    
      When a program is linked with a library, whether statically or using
    a shared library, the combination of the two is legally speaking a
    combined work, a derivative of the original library.  The ordinary
    General Public License therefore permits such linking only if the
    entire combination fits its criteria of freedom.  The Lesser General
    Public License permits more lax criteria for linking other code with
    the library.
    
      We call this license the "Lesser" General Public License because it
    does Less to protect the user's freedom than the ordinary General
    Public License.  It also provides other free software developers Less
    of an advantage over competing non-free programs.  These disadvantages
    are the reason we use the ordinary General Public License for many
    libraries.  However, the Lesser license provides advantages in certain
    special circumstances.
    
      For example, on rare occasions, there may be a special need to
    encourage the widest possible use of a certain library, so that it becomes
    a de-facto standard.  To achieve this, non-free programs must be
    allowed to use the library.  A more frequent case is that a free
    library does the same job as widely used non-free libraries.  In this
    case, there is little to gain by limiting the free library to free
    software only, so we use the Lesser General Public License.
    
      In other cases, permission to use a particular library in non-free
    programs enables a greater number of people to use a large body of
    free software.  For example, permission to use the GNU C Library in
    non-free programs enables many more people to use the whole GNU
    operating system, as well as its variant, the GNU/Linux operating
    system.
    
      Although the Lesser General Public License is Less protective of the
    users' freedom, it does ensure that the user of a program that is
    linked with the Library has the freedom and the wherewithal to run
    that program using a modified version of the Library.
    
      The precise terms and conditions for copying, distribution and
    modification follow.  Pay close attention to the difference between a
    "work based on the library" and a "work that uses the library".  The
    former contains code derived from the library, whereas the latter must
    be combined with the library in order to run.
    
                      GNU LESSER GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License Agreement applies to any software library or other
    program which contains a notice placed by the copyright holder or
    other authorized party saying it may be distributed under the terms of
    this Lesser General Public License (also called "this License").
    Each licensee is addressed as "you".
    
      A "library" means a collection of software functions and/or data
    prepared so as to be conveniently linked with application programs
    (which use some of those functions and data) to form executables.
    
      The "Library", below, refers to any such software library or work
    which has been distributed under these terms.  A "work based on the
    Library" means either the Library or any derivative work under
    copyright law: that is to say, a work containing the Library or a
    portion of it, either verbatim or with modifications and/or translated
    straightforwardly into another language.  (Hereinafter, translation is
    included without limitation in the term "modification".)
    
      "Source code" for a work means the preferred form of the work for
    making modifications to it.  For a library, complete source code means
    all the source code for all modules it contains, plus any associated
    interface definition files, plus the scripts used to control compilation
    and installation of the library.
    
      Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running a program using the Library is not restricted, and output from
    such a program is covered only if its contents constitute a work based
    on the Library (independent of the use of the Library in a tool for
    writing it).  Whether that is true depends on what the Library does
    and what the program that uses the Library does.
    
      1. You may copy and distribute verbatim copies of the Library's
    complete source code as you receive it, in any medium, provided that
    you conspicuously and appropriately publish on each copy an
    appropriate copyright notice and disclaimer of warranty; keep intact
    all the notices that refer to this License and to the absence of any
    warranty; and distribute a copy of this License along with the
    Library.
    
      You may charge a fee for the physical act of transferring a copy,
    and you may at your option offer warranty protection in exchange for a
    fee.
    
      2. You may modify your copy or copies of the Library or any portion
    of it, thus forming a work based on the Library, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) The modified work must itself be a software library.
    
        b) You must cause the files modified to carry prominent notices
        stating that you changed the files and the date of any change.
    
        c) You must cause the whole of the work to be licensed at no
        charge to all third parties under the terms of this License.
    
        d) If a facility in the modified Library refers to a function or a
        table of data to be supplied by an application program that uses
        the facility, other than as an argument passed when the facility
        is invoked, then you must make a good faith effort to ensure that,
        in the event an application does not supply such function or
        table, the facility still operates, and performs whatever part of
        its purpose remains meaningful.
    
        (For example, a function in a library to compute square roots has
        a purpose that is entirely well-defined independent of the
        application.  Therefore, Subsection 2d requires that any
        application-supplied function or table used by this function must
        be optional: if the application does not supply it, the square
        root function must still compute square roots.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Library,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Library, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote
    it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Library.
    
    In addition, mere aggregation of another work not based on the Library
    with the Library (or with a work based on the Library) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may opt to apply the terms of the ordinary GNU General Public
    License instead of this License to a given copy of the Library.  To do
    this, you must alter all the notices that refer to this License, so
    that they refer to the ordinary GNU General Public License, version 2,
    instead of to this License.  (If a newer version than version 2 of the
    ordinary GNU General Public License has appeared, then you can specify
    that version instead if you wish.)  Do not make any other change in
    these notices.
    
      Once this change is made in a given copy, it is irreversible for
    that copy, so the ordinary GNU General Public License applies to all
    subsequent copies and derivative works made from that copy.
    
      This option is useful when you wish to copy part of the code of
    the Library into a program that is not a library.
    
      4. You may copy and distribute the Library (or a portion or
    derivative of it, under Section 2) in object code or executable form
    under the terms of Sections 1 and 2 above provided that you accompany
    it with the complete corresponding machine-readable source code, which
    must be distributed under the terms of Sections 1 and 2 above on a
    medium customarily used for software interchange.
    
      If distribution of object code is made by offering access to copy
    from a designated place, then offering equivalent access to copy the
    source code from the same place satisfies the requirement to
    distribute the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      5. A program that contains no derivative of any portion of the
    Library, but is designed to work with the Library by being compiled or
    linked with it, is called a "work that uses the Library".  Such a
    work, in isolation, is not a derivative work of the Library, and
    therefore falls outside the scope of this License.
    
      However, linking a "work that uses the Library" with the Library
    creates an executable that is a derivative of the Library (because it
    contains portions of the Library), rather than a "work that uses the
    library".  The executable is therefore covered by this License.
    Section 6 states terms for distribution of such executables.
    
      When a "work that uses the Library" uses material from a header file
    that is part of the Library, the object code for the work may be a
    derivative work of the Library even though the source code is not.
    Whether this is true is especially significant if the work can be
    linked without the Library, or if the work is itself a library.  The
    threshold for this to be true is not precisely defined by law.
    
      If such an object file uses only numerical parameters, data
    structure layouts and accessors, and small macros and small inline
    functions (ten lines or less in length), then the use of the object
    file is unrestricted, regardless of whether it is legally a derivative
    work.  (Executables containing this object code plus portions of the
    Library will still fall under Section 6.)
    
      Otherwise, if the work is a derivative of the Library, you may
    distribute the object code for the work under the terms of Section 6.
    Any executables containing that work also fall under Section 6,
    whether or not they are linked directly with the Library itself.
    
      6. As an exception to the Sections above, you may also combine or
    link a "work that uses the Library" with the Library to produce a
    work containing portions of the Library, and distribute that work
    under terms of your choice, provided that the terms permit
    modification of the work for the customer's own use and reverse
    engineering for debugging such modifications.
    
      You must give prominent notice with each copy of the work that the
    Library is used in it and that the Library and its use are covered by
    this License.  You must supply a copy of this License.  If the work
    during execution displays copyright notices, you must include the
    copyright notice for the Library among them, as well as a reference
    directing the user to the copy of this License.  Also, you must do one
    of these things:
    
        a) Accompany the work with the complete corresponding
        machine-readable source code for the Library including whatever
        changes were used in the work (which must be distributed under
        Sections 1 and 2 above); and, if the work is an executable linked
        with the Library, with the complete machine-readable "work that
        uses the Library", as object code and/or source code, so that the
        user can modify the Library and then relink to produce a modified
        executable containing the modified Library.  (It is understood
        that the user who changes the contents of definitions files in the
        Library will not necessarily be able to recompile the application
        to use the modified definitions.)
    
        b) Use a suitable shared library mechanism for linking with the
        Library.  A suitable mechanism is one that (1) uses at run time a
        copy of the library already present on the user's computer system,
        rather than copying library functions into the executable, and (2)
        will operate properly with a modified version of the library, if
        the user installs one, as long as the modified version is
        interface-compatible with the version that the work was made with.
    
        c) Accompany the work with a written offer, valid for at
        least three years, to give the same user the materials
        specified in Subsection 6a, above, for a charge no more
        than the cost of performing this distribution.
    
        d) If distribution of the work is made by offering access to copy
        from a designated place, offer equivalent access to copy the above
        specified materials from the same place.
    
        e) Verify that the user has already received a copy of these
        materials or that you have already sent this user a copy.
    
      For an executable, the required form of the "work that uses the
    Library" must include any data and utility programs needed for
    reproducing the executable from it.  However, as a special exception,
    the materials to be distributed need not include anything that is
    normally distributed (in either source or binary form) with the major
    components (compiler, kernel, and so on) of the operating system on
    which the executable runs, unless that component itself accompanies
    the executable.
    
      It may happen that this requirement contradicts the license
    restrictions of other proprietary libraries that do not normally
    accompany the operating system.  Such a contradiction means you cannot
    use both them and the Library together in an executable that you
    distribute.
    
      7. You may place library facilities that are a work based on the
    Library side-by-side in a single library together with other library
    facilities not covered by this License, and distribute such a combined
    library, provided that the separate distribution of the work based on
    the Library and of the other library facilities is otherwise
    permitted, and provided that you do these two things:
    
        a) Accompany the combined library with a copy of the same work
        based on the Library, uncombined with any other library
        facilities.  This must be distributed under the terms of the
        Sections above.
    
        b) Give prominent notice with the combined library of the fact
        that part of it is a work based on the Library, and explaining
        where to find the accompanying uncombined form of the same work.
    
      8. You may not copy, modify, sublicense, link with, or distribute
    the Library except as expressly provided under this License.  Any
    attempt otherwise to copy, modify, sublicense, link with, or
    distribute the Library is void, and will automatically terminate your
    rights under this License.  However, parties who have received copies,
    or rights, from you under this License will not have their licenses
    terminated so long as such parties remain in full compliance.
    
      9. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Library or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Library (or any work based on the
    Library), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Library or works based on it.
    
      10. Each time you redistribute the Library (or any work based on the
    Library), the recipient automatically receives a license from the
    original licensor to copy, distribute, link with or modify the Library
    subject to these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties with
    this License.
    
      11. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Library at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Library by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Library.
    
    If any portion of this section is held invalid or unenforceable under any
    particular circumstance, the balance of the section is intended to apply,
    and the section as a whole is intended to apply in other circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      12. If the distribution and/or use of the Library is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Library under this License may add
    an explicit geographical distribution limitation excluding those countries,
    so that distribution is permitted only in or among countries not thus
    excluded.  In such case, this License incorporates the limitation as if
    written in the body of this License.
    
      13. The Free Software Foundation may publish revised and/or new
    versions of the Lesser General Public License from time to time.
    Such new versions will be similar in spirit to the present version,
    but may differ in detail to address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Library
    specifies a version number of this License which applies to it and
    "any later version", you have the option of following the terms and
    conditions either of that version or of any later version published by
    the Free Software Foundation.  If the Library does not specify a
    license version number, you may choose any version ever published by
    the Free Software Foundation.
    
      14. If you wish to incorporate parts of the Library into other free
    programs whose distribution conditions are incompatible with these,
    write to the author to ask for permission.  For software which is
    copyrighted by the Free Software Foundation, write to the Free
    Software Foundation; we sometimes make exceptions for this.  Our
    decision will be guided by the two goals of preserving the free status
    of all derivatives of our free software and of promoting the sharing
    and reuse of software generally.
    
                                NO WARRANTY
    
      15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
    WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
    EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
    OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
    KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
    IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
    PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
    LIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
    THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
    
      16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
    WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
    AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
    FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
    CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
    LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
    RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
    FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
    SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
    DAMAGES.
    
                         END OF TERMS AND CONDITIONS
    
               How to Apply These Terms to Your New Libraries
    
      If you develop a new library, and you want it to be of the greatest
    possible use to the public, we recommend making it free software that
    everyone can redistribute and change.  You can do so by permitting
    redistribution under these terms (or, alternatively, under the terms of the
    ordinary General Public License).
    
      To apply these terms, attach the following notices to the library.  It is
    safest to attach them to the start of each source file to most effectively
    convey the exclusion of warranty; and each file should have at least the
    "copyright" line and a pointer to where the full notice is found.
    
        <one line to give the library's name and a brief idea of what it does.>
        Copyright (C) <year>  <name of author>
    
        This library is free software; you can redistribute it and/or
        modify it under the terms of the GNU Lesser General Public
        License as published by the Free Software Foundation; either
        version 2.1 of the License, or (at your option) any later version.
    
        This library is distributed in the hope that it will be useful,
        but WITHOUT ANY WARRANTY; without even the implied warranty of
        MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
        Lesser General Public License for more details.
    
        You should have received a copy of the GNU Lesser General Public
        License along with this library; if not, write to the Free Software
        Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
    
    Also add information on how to contact you by electronic and paper mail.
    
    You should also get your employer (if you work as a programmer) or your
    school, if any, to sign a "copyright disclaimer" for the library, if
    necessary.  Here is a sample; alter the names:
    
      Yoyodyne, Inc., hereby disclaims all copyright interest in the
      library `Frob' (a library for tweaking knobs) written by James Random Hacker.
    
      <signature of Ty Coon>, 1 April 1990
      Ty Coon, President of Vice
    
    That's all there is to it!
    

  • robotic-inbox

    Robotic Inbox Status: 💟 End of Life

    🚀 Automatic Release ✅ Dedicated Servers Supported ServerSide ✅ Single Player and P2P Supported

    Summary

    A special container that automatically sorts and distributes items to other nearby storage containers.

    💟 This mod has reached End of Life and will not be directly updated to support 7 Days to Die 2.0 or beyond. Because this mod is MIT-Licensed and open-source, it is possible that other modders will keep this concept going in the future.

    Searching NexusMods or 7 Days to Die Mods may lead to discovering other mods either built on top of or inspired by this mod.

    robotic inbox, standard color

    Support

    💟 This mod has reached its end of life and is no longer supported or maintained by Kanaverum (Jonathan Robertson // me). I am instead focused on my own game studio (Calculating Chaos, if curious).

    â¤ī¸ All of my public mods have always been open-source and are MIT-Licensed; please feel free to take some or all of the code to reuse, modify, redistribute, and even rebrand however you like! The code in this project isn’t perfect; as you update, add features, fix bugs, and otherwise improve upon my ideas, please make sure to give yourself credit for the work you do and publish your new version of the mod under your own name 😄 🎉

    Features

    Automatic Item Distribution and Organization

    This container will automatically distribute resources placed within it if they are present in other nearby containers. Resources can be distributed to any container within 5 meters by default (horizontally and vertically), so long as the following conditions are met:

    1. If the inbox is locked, the target must be locked and share the same password.
    2. If the inbox is unlocked, the target must also be unlocked.
    3. If the inbox is within an LCB, the target must also be within that same LCB.
    4. Backpacks, vehicles, and storage not placed by a player are ignored.

    Press & hold the Action Key to lock it or set a combination.

    This explanation is included in-game as the Robotic Inbox Block Description.

    Dynamic Hints

    âœī¸ While any secure or insecure player-placed storage container can be targeted by the Inbox, Writable Storage Containers will describe how the Inbox is interacting with them, making them the recommended type of container to place near an Inbox.

    robotic inbox being repaired

    Repairable Locks (new to v4)

    If someone busts your lock, you can replace the lock simply by repairing it. This will go through the upgrade flow and should appear relatively seamless.

    Or, if you break the lock on someone else's Robotic Inbox (for example, when a friend no longer logs in), repairing it will allow you to take ownership of the Inbox and adjust its password, lock state, etc.

    âš ī¸ Robotic Inboxes with broken locks will not be able to distribute items again until they’re repaired.

    robotic inbox being repaired

    Multiple Colors (new to v4)

    robotic inboxes with colors (unlit)

    unlit in daylight

    robotic inboxes with colors (lit)

    lit in daylight with a headlamp

    Configuration Options (new to v4)

    You now have a slew of options you can use to fine-tune the experience for yourself and any other players who happen to join your game!

    | Command | Default | Constraints | Description | Impact |
    | --- | --- | --- | --- | --- |
    | help roboticinbox | N/A | N/A | receive help information about the set of commands this mod provides | N/A |
    | ri horizontal-range <int> | 5 | 0 to 128 | set how wide (x/z axes) the inbox should scan for storage containers | very high |
    | ri vertical-range <int> | 5 | -1 to 253 (-1 = bedrock-to-sky) | set how high/low (y axis) the inbox should scan for storage containers | high |
    | ri success-notice-time <float> | 2.0 | 0.0 to 10.0 | set how long to leave the distribution success notice on boxes | N/A |
    | ri blocked-notice-time <float> | 3.0 | 0.0 to 10.0 | set how long to leave the distribution blocked notice on boxes | N/A |
    | ri base-siphoning-protection <bool> | True | True or False | whether inboxes within claimed land should prevent scanning outside the bounds of their LCB | N/A |
    | ri dm | False | True or False | toggle debug logging mode | medium |
    | ri debug | False | True or False | toggle debug logging mode (same as dm) | medium |
    • 📝 Settings like horizontal-range and vertical-range will also update the block description for your players, so things remain clear and accurate. Changes made at runtime even auto-update the block description for all online players.
    • 💾 Except for debug, these settings are retained in a file on the host system:
      • Windows: %APPDATA%\7DaysToDie\Saves\MapName\GameName\robotic-inbox.json
      • Linux: ~/.local/share/7DaysToDie/Saves/MapName/GameName/robotic-inbox.json

    Info

    What Happens to Leftovers?

    📦 Any items in the Inbox that cannot be matched with another container will be left there until you have time to decide which container to store them in.

    How Would I Acquire a Robotic Inbox In-Game?

    đŸĒ Robotic Inbox can be purchased from a trader as soon as you start the game.

    đŸ› ī¸ Robotic Inboxes can also be crafted at the Workbench after reading enough about robotics to also craft a Tier 1 Junk Sledge.

    | Ingredient | Count |
    | --- | --- |
    | resourceForgedIron | 4 |
    | resourceMetalPipe | 3 |
    | resourceMechanicalParts | 6 |
    | resourceElectricParts | 8 |

    For Hosts/Admins: Performance Considerations

    This mod does a lot, so I would understand if you have any concerns about how much load it would add to your server.

    Here are some things I kept in mind as I was designing and tweaking this mod:

    • Container data is already processed server-side in 7 Days to Die. This means that
      1. adjustments to storage are actually most performant on the server’s end rather than on the client’s end and…
      2. this approach to manipulating container data actually reduces networking calls vs any client-side mod that operates from the players’ ends
    • Container organization is run on each box within range via a concurrent loop. This ensures that as inboxes are scanning and updating your players’ containers, the server can still process other tasks and avoid zombie or crafting lag.

    Setup

    Without proper installation, this mod will not work as expected. Following this guide will help you complete the installation properly.

    If you have trouble getting things working, you can reach out to me for support via Support.

    Environment / EAC / Hosting Requirements

    | Environment | Compatible | Does EAC Need to be Disabled? | Who needs to install? |
    | --- | --- | --- | --- |
    | Dedicated Server | Yes | no | only the server |
    | Peer-to-Peer Hosting | Yes | only on the host | only the host |
    | Single Player Game | Yes | Yes | self (of course) |

    🤔 If you aren’t sure what some of this means, detailed steps are provided below to walk you through the setup process.

    Map Considerations for Installation or Uninstallation

    • Does adding this mod require a fresh map?
      • No! You can drop this mod into an ongoing map without any trouble.
    • Does removing this mod require a fresh map?
      • Since this mod adds new blocks, removing it from a map could cause some issues: previously placed robotic inbox blocks would now throw exceptions in your logs, at the very least.

    Windows PC (Single Player or Hosting P2P)

    â„šī¸ If you plan to host a multiplayer game, only the host PC will need to install this mod. Other players connecting to your session do not need to install anything for this mod to work 😉

    1. 📦 Download the latest release by navigating to this link and clicking the link for robotic-inbox.zip
    2. 📂 Unzip this file to a folder named robotic-inbox by right-clicking it and choosing the Extract All... option (Windows will suggest extracting to a new folder named robotic-inbox – this is the option you want to use)
    3. 🕵️ Locate and create your mods folder (if missing): in another Windows Explorer window or tab, paste %APPDATA%\7DaysToDie into your address bar, then double-click your Mods folder to enter it.
      • If no Mods folder is present, you will first need to create it, then enter it
    4. 🚚 Move your new robotic-inbox folder into your Mods folder by dragging & dropping or cutting/copying & pasting, whichever you prefer
    5. ♻️ Stop the game if it’s currently running, then start the game again without EAC by navigating to your install folder and running 7DaysToDie.exe
      • running from Steam or other launchers usually starts 7 Days up with the 7DaysToDie_EAC.exe program instead, but running 7 Days directly will skip EAC startup

    Critical Reminders

    • âš ī¸ it is NECESSARY for the host to run with EAC disabled or the DLL file in this mod will not be able to run
    • 😉 other players DO NOT need to disable EAC in order to connect to your game session, so you don’t need to walk them through these steps
    • 🔑 it is also HIGHLY RECOMMENDED to add a password to your game session
      • while disabling EAC is 100% necessary (for P2P or single player) to run this mod properly, it also allows other players to run any mods they want on their end (which could be used to gain access to admin commands and/or grief you or your other players)
      • please note that dedicated servers do not have this limitation and can have EAC fully enabled; we have setup guides for dedicated servers as well, listed in the next 2 sections: Windows/Linux Installation (Server via FTP from Windows PC) and Linux Server Installation (Server via SSH)

    Windows/Linux Installation (Server via FTP from Windows PC)

    1. 📦 Download the latest release by navigating to this link and clicking the link for robotic-inbox.zip
    2. 📂 Unzip this file to a folder named robotic-inbox by right-clicking it and choosing the Extract All... option (Windows will suggest extracting to a new folder named robotic-inbox – this is the option you want to use)
    3. 🕵️ Locate and create your mods folder (if missing):
      • Windows PC or Server: in another window, paste this address into the address bar: %APPDATA%\7DaysToDie, then enter your Mods folder by double-clicking it. If no Mods folder is present, you will first need to create it, then enter it
      • FTP: in another window, connect to your server via FTP and navigate to the game folder that should contain your Mods folder (if no Mods folder is present, you will need to create it in the appropriate location), then enter your Mods folder. If you are unsure where your mods folder should go, reach out to your host.
    4. 🚚 Move this new robotic-inbox folder into your Mods folder by dragging & dropping or cutting/copying & pasting, whichever you prefer
    5. ♻️ Restart your server to allow this mod to take effect and monitor your logs to ensure it starts successfully:
      • you can search the logs for the word RoboticInbox; the name of this mod will appear with that phrase and all log lines it produces will carry this prefix for quick reference
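    As a quick sanity check for step 5, a small helper like the following can filter a log file for the mod's name (a sketch only: the exact log path, filename, and line format depend on your host, and the `check_mod_loaded` name is mine, not part of the mod):

    ```shell
    # check_mod_loaded: print recent lines mentioning RoboticInbox from a log file,
    # or a short notice if the mod has not logged anything yet.
    check_mod_loaded() {
      log_file="$1"
      if grep -q 'RoboticInbox' "$log_file" 2>/dev/null; then
        grep 'RoboticInbox' "$log_file" | tail -n 5
      else
        echo "no RoboticInbox lines found yet"
      fi
    }
    ```

    For example: `check_mod_loaded /path/to/your/console.log` (replace the path with your host's actual log location).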

    Linux Server Installation (Server via SSH)

    1. 🔍 SSH into your server and navigate to the Mods folder on your server
      • if you installed 7 Days to Die with LinuxGSM (which I’d highly recommend), the default mods folder will be under ~/serverfiles/Mods (which you may have to create)
    2. 📦 Download the latest robotic-inbox.zip release from this link with whatever tool you prefer
      • example: wget https://github.com/jonathan-robertson/robotic-inbox/releases/latest/download/robotic-inbox.zip
    3. 📂 Unzip this file to a folder by the same name: unzip robotic-inbox.zip -d robotic-inbox
      • you may need to install unzip if it isn’t already installed: sudo apt-get update && sudo apt-get install unzip
      • once unzipped, you can remove the robotic-inbox download with rm robotic-inbox.zip
    4. â™ģī¸ Restart your server to allow this mod to take effect and monitor your logs to ensure it starts successfully:
      • you can search the logs for RoboticInbox; every log line this mod produces is prefixed with that name for quick reference
      • rather than monitoring telnet, I’d recommend viewing the console logs directly because mod and DLL registration happens very early in the startup process and you may miss it if you connect via telnet after this happens
      • you can reference your server config file to identify your logs folder
      • if you installed 7 Days to Die with LinuxGSM, your console log will be under log/console/sdtdserver-console.log
      • I’d highly recommend using less to open this file: it’s safe to use on files that are actively being written to, it’s easy to search, and it can follow the file in realtime with a keyboard shortcut
        • follow: SHIFT+F (use CTRL+C to exit follow mode)
        • exit: q to exit less when not in follow mode
        • search: /RoboticInbox [enter] to enter search mode for the lines that will be produced by this mod; while in search mode, use n to navigate to the next match or SHIFT+n to navigate to the previous match
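
    The SSH steps above can be condensed into a short shell session. The paths are assumptions based on a LinuxGSM layout as described in step 1, so adjust MODS_DIR for your own install:

    ```shell
    # Mods folder for a LinuxGSM install (adjust for your server layout)
    MODS_DIR="$HOME/serverfiles/Mods"
    mkdir -p "$MODS_DIR"
    cd "$MODS_DIR"

    # Step 2: download the latest release
    wget -q https://github.com/jonathan-robertson/robotic-inbox/releases/latest/download/robotic-inbox.zip

    # Step 3: unzip into a folder of the same name, then remove the archive
    unzip -o robotic-inbox.zip -d robotic-inbox
    rm robotic-inbox.zip

    # Step 4: restart your server, then follow the console log, e.g.:
    #   less +F log/console/sdtdserver-console.log
    ```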
  • mayLCU

    mayLCU

    mayLCU is a C# library that provides a convenient way to interact with the League of Legends Client (LCU) through its HTTP API. It allows you to perform various actions such as making requests, retrieving data, and sending commands to the League Client.

    Usage

    Creating an instance of LCU

    To create an instance of LCU and connect to the League of Legends Client, you can use the provided factory methods:

    • HookRiotClient(): Connects to the Riot Client.
    • HookLeagueClient(): Connects to the League Client.
    • HookLeagueStore(LCU leagueClient): Connects to the League Store.

    Example:

    LCU lcu = LCU.HookLeagueClient();

    Making Requests

    mayLCU provides methods to make HTTP requests to the League of Legends Client API. You can use the following methods:

    • RequestAsync(string uri): Sends an asynchronous GET request to the specified URI and returns the response as a string.
    • RequestAsync(RequestMethod requestMethod, string uri, string payload = ""): Sends an asynchronous HTTP request with the specified method (GET, POST, PUT, DELETE, etc.), URI, and payload. Returns the response as a string.

    Example:

    string response = await lcu.RequestAsync("/lol-summoner/v1/current-summoner");

    Handling Responses

    You can also make requests that return dynamic objects instead of strings. The library provides methods for that purpose:

    • RequestDynamicAsync(string uri): Sends an asynchronous GET request to the specified URI and returns the response as a dynamic object.
    • RequestDynamicAsync(RequestMethod requestMethod, string uri, string payload = ""): Sends an asynchronous HTTP request with the specified method (GET, POST, PUT, DELETE, etc.), URI, and payload. Returns the response as a dynamic object.

    Example:

    dynamic data = await lcu.RequestDynamicAsync("/lol-summoner/v1/current-summoner");
    string summonerName = data.displayName;

    Synchronous Requests

    If you prefer to make synchronous requests instead of asynchronous ones, mayLCU provides equivalent synchronous methods:

    • Request(string uri): Sends a synchronous GET request to the specified URI and returns the response as a string.
    • Request(RequestMethod requestMethod, string uri, string payload = ""): Sends a synchronous HTTP request with the specified method (GET, POST, PUT, DELETE, etc.), URI, and payload. Returns the response as a string.

    Example:

    string response = lcu.Request("/lol-summoner/v1/current-summoner");

    Additional Information

    • IsConnected: Gets a value indicating whether the connection to the League Client is established.
    • Target: Gets the targeted process name (without the “Ux” suffix).
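
    For example, these two properties can serve as a quick connectivity check before issuing requests. This is a small sketch assuming the API surface described above:

    LCU lcu = LCU.HookLeagueClient();
    
    // Bail out early if the client isn't running or the connection failed
    if (!lcu.IsConnected)
    {
        Console.WriteLine($"Could not connect to {lcu.Target}.");
        return;
    }
    
    Console.WriteLine($"Connected to {lcu.Target}.");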

    Examples

    Here are some examples of how you can use mayLCU:

    // Hook League Client
    LCU lcu = LCU.HookLeagueClient();
    
    // Get the current summoner's name
    dynamic data = await lcu.RequestDynamicAsync("/lol-summoner/v1/current-summoner");
    string summonerName = data.displayName;
    Console.WriteLine($"Summoner Name: {summonerName}");

    // Hook League Client
    LCU leagueClient = LCU.HookLeagueClient();
    
    // Hook League Store using the leagueClient instance
    LCU leagueStoreClient = LCU.HookLeagueStore(leagueClient);
    
    // Example: Make a purchase request
    // (accountId, type, itemId, and rpPrice are placeholders you must define)
    var httpPayload = $"{{\"accountId\":{accountId},\"items\":[{{\"inventoryType\":\"{type}\",\"itemId\":{itemId},\"ipCost\":null,\"rpCost\":{rpPrice},\"quantity\":1}}]}}";
    dynamic data = await leagueStoreClient.RequestDynamicAsync(RequestMethod.POST, "/storefront/v3/purchase?language=en_US", httpPayload);

    Disclaimer

    This project is not affiliated with or endorsed by Riot Games.
