Author: 7szni7bjkhrb

  • rexumee

    Rexumee

    License: Apache-2.0 Build Status

    A simple one-page website for a professional resume.

    Rexumee uses a minimal Spring Boot stack with an embedded web server and a YAML file for the resume content. Just download (clone) the source code, build and test it in a local environment, or build the Docker container for production deployment.

    Tech Stack and Libraries

    Edit Resume Content

    To edit the resume content to your preferences, open the file src/main/resources/application.yml and edit the entries under the resume section. The content structure follows standard YAML conventions. Check the customizable fields in the example or in the online demo.

    Building and Running the Application on Localhost

    Build

    mvnw clean package

    Running Application

    mvnw spring-boot:run

    The application runs on port 8080; access http://localhost:8080 from a web browser.

    Running Application (alternative)

    There is an alternative way to run the application. After the build, go to the target folder and make sure the rexumee.jar file is there. Then, from that folder, run:

    java -jar rexumee.jar

    Docker Deployment

    Make sure Docker and Docker Compose are installed prior to deployment. If not:

    • See the documentation about Docker installation here.
    • See how to install Docker Compose here.

    From the project root folder, run:

    docker-compose up -d

    Check whether the deployment is running properly:

    docker ps

    You can then access the web page on port 80.

    Online Demo

    For an online demo and preview, please visit here.

    Copyright and License

    Copyright 2018 Maikel Chandika (mkdika@gmail.com). Code released under the Apache License, Version 2.0. See LICENSE file.

    Visit original content creator repository https://github.com/mkdika/rexumee
  • macdb

    MAC Address Database

    MAC address vendor information database

    Features

    Determine the manufacturer of a device from its MAC address.

    TODO:

    • Improve the database matching strategy
    • Add more vendor information
      • Common vendor names
      • Vendor logos
      • Brand information

    How It Works

    The familiar 48-bit MAC address is in fact defined by the IEEE; its original formal name is EUI-48 (48-bit Extended Unique Identifier). An EUI-48 contains an OUI (Organizationally Unique Identifier), a unique identifier managed by the IEEE and assigned to hardware manufacturers; identifying this part of the address determines the device's manufacturer.

    EUI-48 and EUI-64 identifiers are most commonly used as globally unique network addresses (sometimes called MAC addresses), as specified in various standards. For example, an EUI-48 is commonly used as the address of a hardware interface according to IEEE Std 802, historically using the name “MAC-48”. As another example, an EUI-64 may serve as the identifier of a clock, per IEEE Std 1588. IEEE Std 802 also specifies EUI-64 use for 64-bit globally unique network addresses.

    This project uses the official IEEE registry as its data source; the database is consolidated and synchronized automatically every day.

    Usage

    Supported Data Formats

    Field Descriptions

    Field                  Meaning                                                Example
    registry               Type of the assigned OUI block                         MA-L
    assignment             Organizationally unique identifier assigned by IEEE    002272
    organization_name      Registered name of the manufacturer                    American Micro-Fuel Device Corp.
    organization_address   Registered address of the manufacturer                 2181 Buchanan Loop Ferndale WA US 98248

    Lookup Procedure

    1. Take the first 24 bits of the MAC address (in the usual hyphen-separated hexadecimal notation, the first 6 hex characters, i.e. the first three groups, e.g. AABBCC of AA-BB-CC-DD-EE-FF)
      and look for an exact match in the database's assignment field.
    2. If the matched organization_name is IEEE Registration Authority, continue with the next step;
      otherwise return the current match.
    3. Take the first 28 bits of the MAC address (the first 7 hex characters, e.g. AABBCCD of AA-BB-CC-DD-EE-FF)
      and look for an exact match in the database's assignment field.
    4. If the matched organization_name is IEEE Registration Authority, continue with the next step; otherwise return the current match.
    5. Take the first 36 bits of the MAC address (the first 9 hex characters, e.g. AABBCCDDE of AA-BB-CC-DD-EE-FF)
      and look for an exact match in the database's assignment field.
    6. If there is a match, return it; otherwise return an empty result (a code sketch of this procedure follows below).
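
    The lookup procedure above can be sketched in a few lines of code. Below is a minimal example in Python; the in-memory dict standing in for the real database is an assumption for illustration, keyed by assignment prefix and holding organization_name values (the sample entry is the row from the field table above).

        IEEE_RA = "IEEE Registration Authority"

        def lookup(mac, db):
            """Return the organization_name for a MAC address, or None if unknown."""
            hexstr = mac.replace("-", "").replace(":", "").upper()
            match = None
            for prefix_len in (6, 7, 9):              # 24, 28 and 36 bits of the address
                found = db.get(hexstr[:prefix_len])
                if found is None:
                    break                             # no longer assignment is registered
                match = found
                if match != IEEE_RA:
                    break                             # a concrete manufacturer was found
            return None if match in (None, IEEE_RA) else match

        db = {"002272": "American Micro-Fuel Device Corp."}   # sample row from the table above
        print(lookup("00-22-72-AA-BB-CC", db))                # -> American Micro-Fuel Device Corp.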

    Official Lookup Page

    https://regauth.standards.ieee.org/standards-ra-web/pub/view.html

    Official Data Releases

    1. MAC Address Block Large (MA-L) TXT CSV
    2. MAC Address Block Medium (MA-M) TXT CSV
    3. MAC Address Block Small (MA-S) TXT CSV

    Official Matching Guidance

    If the first 24 bits match an OUI assigned to the IEEE RA, then a search of the first 28 or 36 bits may reveal an MA-M or MA-S assignment. If the OUI-36 is not found in an MA-S search, then a search of the first 24 or 28 bits may reveal an MA-L or MA-M assignment from which the OUI-36 has been created from a member of the assigned block.

    Note that the final lookup result is not always completely accurate!

    Your attention is called to the fact that the firms and numbers listed may not always be obvious in product implementation. Some manufacturers subcontract component manufacture and others include registered firms’ All MAC (MA-L, MA-M, MA-S) in their products.

    Visit original content creator repository
    https://github.com/WH-2099/macdb

  • bitcoind-exporter

    bitcoind-exporter

    NPM package version Docker Build Status

    Bitcoind metrics Prometheus exporter.

    bitcoind-exporter is compatible with most bitcoin forks.

    It produces blockchain, wallet and address metrics. The most relevant metrics are:

    • wallet total balance
    • wallet version
    • whether the wallet is unlocked
    • available (spendable) balance for each managed address (and watched addresses)
    • best block time and index

    Usage

    Edit the .env environment file to suit your needs and run:

    npm start
    

    bitcoind-exporter uses the bitcoind JSON-RPC API under the hood and needs these credentials:

    rpcuser=test
    rpcpassword=1cf98b57-5i09-4fa1-9c07-2e28cb2cb47b
    

    Usage with other wallets

    The following environment variables are available, which should be enough for any bitcoin fork:

    ticker=DASH
    rpcuser=test
    rpcpassword=1cf98b57-5i09-4fa1-9c07-2e28cb2cb47b
    rpchost=127.0.0.1
    rpcport=9999
    rpcscheme=http
    

    Docker

    Using environment variables:

    docker run -d --restart always --name my-exporter -p 9439:9439 -e "rpcuser=myrpcuser" -e "rpcpassword=myrpcpassword" -e "rpchost=my-wallet" --link my-wallet lepetitbloc/bitcoind-exporter
    

    Using a .env file:

    docker run -d --restart always --name my-exporter -p 9439:9439 -v /path/to/my/conf:/app/.env --link my-wallet lepetitbloc/bitcoind-exporter
    

    An easy hack is to use your wallet conf directly to feed the exporter's env:

    docker run --name my-exporter -p 9439:9439 -v /path/to/my/conf:/app/.env --link my-wallet lepetitbloc/bitcoind-exporter
    

    Example metrics

    When visiting the metrics URL http://localhost:9439/metrics, the following metrics are produced:

    # HELP bitcoind_best_block_index The block height or index
    # TYPE bitcoind_best_block_index gauge
    bitcoind_best_block_index 69019
    
    # HELP bitcoind_best_block_timestamp_seconds The block time in seconds since epoch (Jan 1 1970 GMT)
    # TYPE bitcoind_best_block_timestamp_seconds gauge
    bitcoind_best_block_timestamp_seconds 1522746083
    
    # HELP bitcoind_chain_difficulty The proof-of-work difficulty as a multiple of the minimum difficulty
    # TYPE bitcoind_chain_difficulty gauge
    bitcoind_chain_difficulty 3511060552899.72
    
    # HELP bitcoind_wallet_version the wallet version
    # TYPE bitcoind_wallet_version gauge
    bitcoind_wallet_version{ticker="BTC"} 71000
    
    # HELP bitcoind_wallet_balance_total the total balance of the wallet
    # TYPE bitcoind_wallet_balance_total gauge
    bitcoind_wallet_balance_total{status="unconfirmed"} 2.7345
    bitcoind_wallet_balance_total{status="immature"} 0
    bitcoind_wallet_balance_total{status="confirmed"} 42.73453501
    
    # HELP bitcoind_wallet_transactions_total the total number of transactions in the wallet
    # TYPE bitcoind_wallet_transactions_total gauge
    bitcoind_wallet_transactions_total 77
    
    # HELP bitcoind_wallet_key_pool_oldest_timestamp_seconds the timestamp of the oldest pre-generated key in the key pool
    # TYPE bitcoind_wallet_key_pool_oldest_timestamp_seconds gauge
    bitcoind_wallet_key_pool_oldest_timestamp_seconds 1516231938
    
    # HELP bitcoind_wallet_key_pool_size_total How many new keys are pre-generated.
    # TYPE bitcoind_wallet_key_pool_size_total gauge
    bitcoind_wallet_key_pool_size_total 1000
    
    # HELP bitcoind_wallet_unlocked_until_timestamp_seconds the timestamp that the wallet is unlocked for transfers, or 0 if the wallet is locked
    # TYPE bitcoind_wallet_unlocked_until_timestamp_seconds gauge
    bitcoind_wallet_unlocked_until_timestamp_seconds 0
    
    # HELP bitcoind_wallet_pay_tx_fee_per_kilo_bytes the transaction fee configuration, set in Unit/kB
    # TYPE bitcoind_wallet_pay_tx_fee_per_kilo_bytes gauge
    bitcoind_wallet_pay_tx_fee_per_kilo_bytes 0
    
    # HELP bitcoind_address_balance_total address balance
    # TYPE bitcoind_address_balance_total gauge
    bitcoind_address_balance_total{address="1FxZE15d8bt381EuDckdDdp7vw8FUiLzu6"} 41.00683469
    bitcoind_address_balance_total{address="1QAm6J6jLmcm7ce87ujrSdmjPNX9fgRUYZ"} 1.72770032
    
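    Since the exporter speaks the standard Prometheus text format, the endpoint can also be checked from a script. A minimal sketch in Python, assuming the default port 9439 shown above:

        import urllib.request

        # Fetch the metrics endpoint and print only the bitcoind_* samples,
        # skipping the "# HELP" / "# TYPE" comment lines.
        with urllib.request.urlopen("http://localhost:9439/metrics") as resp:
            for line in resp.read().decode().splitlines():
                if line.startswith("bitcoind_"):
                    print(line)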

    Demo

    You can test this exporter with docker-compose:

    docker-compose up
    

    Resources

    Licence

    MIT

    Visit original content creator repository https://github.com/LePetitBloc/bitcoind-exporter
  • database-scala

    Database (Scala)

    Connecting to the database

    1. Create a folder inside main with a .conf file:
    src/main/resources/application.conf
    2. The contents of the file must be the credentials:

    db {
      user = "root"
      password = "pontificie"
      urlMaestro = "jdbc:mariadb://localhost:3307/proyecto_aula"
      urlEsclavo = "jdbc:mariadb://localhost:3307/proyecto_aula"
    }

    Installing Docker, MariaDB and Docker Compose on Ubuntu

    Step 1: Install Docker

    1. Update your existing package list:
    sudo apt-get update
    2. Install the prerequisite packages that let apt use packages over HTTPS:
    sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
    3. Add the GPG key for the official Docker repository to your system:
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    4. Add the Docker repository to the APT sources:
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    5. Update the package database with the Docker packages from the newly added repository:
    sudo apt-get update
    6. Make sure you are about to install from the Docker repository instead of the default Ubuntu repository:
    apt-cache policy docker-ce
    7. Finally, install Docker:
    sudo apt-get install -y docker-ce
    8. Docker should now be installed, the daemon started, and the process enabled to start on boot. Verify that it is running:
    sudo systemctl status docker

    Step 2: Install MariaDB

    1. Update your existing package list:
    sudo apt-get update
    2. Then install MariaDB with the following command:
    sudo apt-get install mariadb-server
    3. Make sure MariaDB is running with the systemctl start command:
    sudo systemctl start mariadb.service

    Step 3: Install Docker Compose

    1. Download the latest stable release of Docker Compose by running this command:
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    2. Next, apply executable permissions to the binary:
    sudo chmod +x /usr/local/bin/docker-compose

    Configuring MariaDB Replication

    1. Master configuration:

    # Edit the MariaDB configuration file
    sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
    
    # Set the server's unique identifier (server_id)
    server_id = 1
    2. Slave configuration:

    # Edit the MariaDB configuration file
    sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
    
    # Set the server's unique identifier (server_id)
    server_id = 2
    3. Create a replication user:

    # Log in to MariaDB as the root user
    mysql -u root -p
    
    # Create a replication user and grant it the required privileges
    CREATE USER 'replication_user'@'%' IDENTIFIED BY 'password';
    GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';
    4. Obtain the master's status information:

    # Log in to MariaDB as the root user
    mysql -u root -p
    
    # Obtain the current binary log position and the master's identification
    SHOW MASTER STATUS;
    5. Configure the slave with the master's information:

    # Log in to MariaDB as the root user on the slave
    mysql -u root -p
    
    # Configure the slave using the master's information (binary log file and position)
    CHANGE MASTER TO MASTER_HOST='master_ip', MASTER_USER='replication_user', MASTER_PASSWORD='password', MASTER_LOG_FILE='binlog_file', MASTER_LOG_POS=binlog_position;
    6. Start replication on the slave:

    # Log in to MariaDB as the root user on the slave
    mysql -u root -p
    
    # Start replication on the slave
    START SLAVE;

    Running the Scala Project in Docker

    1. Build the Docker image:

    # Build the Docker image
    docker build -t myproject .
    2. Run the Docker container:

    # Run the Docker container
    docker run -d -p 8080:8080 myproject

    Docker Compose

    Docker-compose

    Master Docker

    Dockerfile

    Database connection

    Connection

    Project dependencies

    Dependencies

    Visit original content creator repository
    https://github.com/Proyecto-Boston/database-scala

  • free-programming-books

    List of Free Learning Resources In Many Languages

    Awesome  License: CC BY 4.0  Hacktoberfest 2023 stats

    Search the list at https://ebookfoundation.github.io/free-programming-books-search/.

    This page is available as an easy-to-read website. Access it by clicking on https://ebookfoundation.github.io/free-programming-books/.

    Intro

    This list was originally a clone of StackOverflow – List of Freely Available Programming Books with contributions from Karan Bhangui and George Stocker.

    The list was moved to GitHub by Victor Felder for collaborative updating and maintenance. It has grown to become one of GitHub’s most popular repositories.

    GitHub repo forks  GitHub repo stars  GitHub repo contributors
    GitHub org sponsors  GitHub repo watchers  GitHub repo size

    The repo is now administered by the Free Ebook Foundation, a not-for-profit organization devoted to promoting the creation, distribution, archiving, and sustainability of free ebooks. Donations to the Free Ebook Foundation are tax-deductible in the US.

    How To Contribute

    Please read CONTRIBUTING. If you’re new to GitHub, welcome! Remember to abide by our Code of Conduct too (adapted from the Contributor Covenant 1.3; translations also available).

    Click on these badges to see how you might be able to help:

    GitHub repo Issues  GitHub repo Good Issues for newbies  GitHub Help Wanted issues
    GitHub repo PRs  GitHub repo Merged PRs  GitHub Help Wanted PRs

    How To Share

    Resources

    This project lists books and other resources grouped by genres:

    Books

    English, By Programming Language

    English, By Subject

    Other Languages

    Cheat Sheets

    Free Online Courses

    Interactive Programming Resources

    Problem Sets and Competitive Programming

    Podcast – Screencast

    Free Podcasts and Screencasts:

    Programming Playgrounds

    Write, compile, and run your code within a browser. Try it out!

    Translations

    Volunteers have translated many of our Contributing, How-to, and Code of Conduct documents into languages covered by our lists.

    You might notice that there are some missing translations here – perhaps you would like to help out by contributing a translation?

    License

    Each file included in this repository is licensed under the CC BY License.

    Visit original content creator repository https://github.com/EbookFoundation/free-programming-books
  • magnesium

    Magnesium is a simple (~350 lines) header-only kernel implementing a CSP-like
    computation model with actors, messages and communication queues for deeply
    embedded systems.
    It maps actors to unused interrupt vectors and utilizes the interrupt controller
    hardware for scheduling. All functions have constant interrupt-locking time.

    Features

    • Preemptive multitasking
    • Easy integration into any project
    • Hardware-assisted scheduling
    • Unlimited number of actors and queues
    • Zero-copy message-passing communication
    • Timer facility
    • Multicore support
    • Hard real-time capability
    • ARMv6-M, ARMv7-M and ARMv8-M are currently supported

    API description

    Please note that the kernel itself does not initialize the interrupt
    controller, so it is the user's responsibility to properly set vector priorities
    before the first actor is created.

    Initialization of the global context containing the runqueues. Don't forget to
    declare g_mg_context as a global variable of type mg_context_t.

        void mg_context_init(void);
    

    Select the next actor to run. Must be called inside vectors designated for
    actor execution.

        void mg_context_schedule(unsigned int this_vector);
    

    Message queue initialization. A queue is always empty after init.

        void mg_queue_init(struct mg_queue_t* q);
    

    Message pool initialization. Each message must contain a header with
    type mg_message_t as its first member.

        void mg_message_pool_init(struct mg_message_pool_t* pool, void* mem, size_t len, size_t msg_sz);
    

    Example:

        static struct { 
            mg_message_t header; 
            unsigned int payload;
        } msgs[10];
    
        static struct mg_message_pool_t pool;
        mg_message_pool_init(&pool, msgs, sizeof(msgs), sizeof(msgs[0]));
    

    Actor object initialization. An actor is a stackless run-to-completion function.

        void mg_actor_init(
            struct mg_actor_t* actor, 
            struct mg_queue_t* (*func)(struct mg_actor_t* self, struct mg_message_t* msg),
            unsigned int vect, 
            struct mg_queue_t* q);
    

    The actor will be implicitly subscribed to the specified queue. If q == NULL,
    then the actor's function will be called inside init to obtain the queue pointer.
    The rule of thumb here: if you want your actor to always be called with a
    valid message pointer, including the first call, then use 'subscription on init'.
    Otherwise, if you prefer async-style coding with an internal explicit or implicit
    state machine, then pass NULL here; the actor will be called on init with no
    message so that it can initialize the state machine.

    Note: an actor's default CPU is the one where it was initialized. This behavior
    may be overridden by explicitly setting actor.cpu = N. All activations of the actor
    will then happen on that CPU.

    Message management. Alloc returns void* to avoid explicit typecasts to a
    specific message type. It may safely be assumed that this pointer always
    points to the message header. If a message pool returns NULL, the pool may be
    typecast to a queue; if an actor subscribes to that queue, it will be
    activated when someone returns a message to the pool. This is the way
    to deal with limited memory pools.

        void* mg_message_alloc(struct mg_message_pool_t* pool);
        void mg_message_free(struct mg_message_t* msg);
    

    Sending a message to a queue. Queues have no internal storage; they contain
    just the head of a linked list, so sending cannot fail and no return status is needed.

        void mg_queue_push(struct mg_queue_t* q, struct mg_message_t* msg);
    

    Synchronous polling of a message queue.

        struct mg_message_t* mg_queue_pop(struct mg_queue_t* q, NULL);
    

    If the system has a tick source, you can also use the timing facility. The tick
    function has to be called periodically.

        void mg_context_tick(void);
    

    Actor execution can be delayed by a specified number of ticks by returning a
    special value:

        return mg_sleep_for(<delay in ticks>, self);
    

    The calling actor will be activated with an empty (zero) message when the timeout is reached.

    Warning! It is expected that interrupts are enabled when any of these functions is called.

    Protothreads

    Since actors are stackless, all state should be maintained by the user.
    This usually leads to state machines inside the actor function, as C does not
    support async/await-like functionality. To simplify writing these types of
    actors, three additional macros are provided: MG_ACTOR_START, MG_ACTOR_END
    and MG_AWAIT. They may be used to write actors as protothreads:

        struct mg_queue_t* actor_fn(struct mg_actor_t* self, struct mg_message_t* msg) {
            MG_ACTOR_START;
    
            for (;;) {
                MG_AWAIT(mg_sleep_for(100, self)); // waiting for 100 ticks
                ...
                MG_AWAIT(queue); // awaiting for messages in the queue
            }
    
            MG_ACTOR_END;
        }
    

    These macros are optional and provided only for convenience.

    How to use

    1. Include magnesium.h into your application.
    2. Setup include directories to point to appropriate porting header (mg_port.h).
    3. Add global variable g_mg_context to some file in your project.
    4. Initialize interrupt controller registers and priorities, enable irqs.
    5. Initialize context, message pools, queues and objects in your main().
    6. Put calls to ‘schedule’ in interrupt handlers associated with actors.
    7. Put calls to ‘tick’ in interrupt handler of tick source.
    8. Put calls to alloc/push in interrupt handlers associated with devices.
    9. Implement message handling code in actor’s functions.

    Demo

    The demo is a toy example that blinks an LED. It is expected
    that the arm-none-eabi- compiler is available via the PATH.
    Most demos use make for building. For the Raspberry Pi Pico 2 SDK version use

    cmake -DPICO_BOARD=pico2
    make
    

    Why ‘Magnesium’

    RTOS-less systems are often called ‘bare-metal’ and magnesium is the ‘key
    component of strong lightweight metal alloys’. Also, magnesium is one of
    just seven ‘simple’ metals containing only s- and p- electron orbitals.

    Visit original content creator repository
    https://github.com/romanf-dev/magnesium

  • Python-Games

    Python Games

    GitHub stars GitHub license GitHub forks Visits Badge Created Badge Updated Badge

    All games made in Python – Pygame, Tkinter and Turtle

    1. Connect 4

    • For making this game I have used the pygame module. Connect4 is a two-player connection game in which the players take turns dropping one colored disc from the top into a seven-column, six-row vertically suspended grid. The pieces fall straight down, occupying the lowest available space within the column. The objective of the game is to be the first to form a horizontal, vertical, or diagonal line of four of one’s own discs.
    • The pieces are to be manoeuvred over the top of the board using your mouse and simply ‘left-click’ to drop the piece in the particular block.
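
    As an illustration of the board logic described above, a sketch of the "drop a disc into the lowest free cell" step might look like this (this is not the repository's actual code):

        # The board is a 6x7 grid; 0 marks an empty cell.
        ROWS, COLS = 6, 7

        def drop_piece(board, col, piece):
            """Place `piece` in the lowest empty cell of `col`; return the row used, or None if the column is full."""
            for row in range(ROWS - 1, -1, -1):    # scan from the bottom row upwards
                if board[row][col] == 0:
                    board[row][col] = piece
                    return row
            return None

        board = [[0] * COLS for _ in range(ROWS)]
        drop_piece(board, 3, 1)                    # player 1 drops a disc into column 3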

    2. Pong

    • Pong is one of the earliest arcade video games. It is a table tennis sports game featuring simple two-dimensional graphics. The Game has been designed using the turtle module.
      • Controls –
      1. The paddle for player A can be moved using w(up), s(down), a(left) and d(right).
      2. The paddle for player B can be moved using the up, down, right and left arrow keys.

    3. Snake 2D

    • Snake is the common name for a video game concept where the player maneuvers a line which grows in length, with the line itself being a primary obstacle. This game too has been made using the turtle module.
      • Controls -The snake is to be maneuvered using the arrow keys up, down, left and right.

    4. To Do App

    • The goal was to create a simple “To do list” to keep track of the jobs you need to finish. The program has been designed using the Tkinter module (Standard GUI Library for Python).
    • This To-do-List program can perform jobs such as Adding Tasks, Deleting tasks, Sorting the tasks (Either in Ascending or Descending order), Displaying a Random Task, and Displaying the total no. of tasks currently stored in the To-do-List.

    5. Space Invaders

    • The Space_Invaders is a classic arcade game. The goal is to defeat wave after wave of descending aliens with a horizontally moving laser to earn as many points as possible.
    • Each time the bullet hits the enemy, the enemy is destroyed and the player gets a point.
    • I have added sounds for bullet fire and the bullet hitting the enemy.
      • Controls - The player can maneuver the battleship using the arrow keys ‘up’, ‘down’, ‘left’ and ‘right’ and can fire bullets using the spacebar.

    6. Dodge

    • The Dodger game has the player control a small person (which we call the player’s character) who must dodge a whole bunch of baddies that fall from the top of the screen. The longer the player can keep dodging the baddies, the higher the score they will get.
      • Controls -The player can maneuver using the arrow keys ‘up’, ‘down’, ‘left’ and ‘right’.

    7. Client-Server Chat application

    • Created a chat server that takes in the name of the host; once the connection is established, the client and the server can chat with each other. The two files need to be run simultaneously.
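
    A minimal sketch of how the server side of such a program can be structured with Python's socket module (hypothetical, not the repository's actual file):

        import socket

        host = input("Enter host name: ")                  # e.g. "localhost"
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.bind((host, 5000))                      # the port number is an arbitrary choice
            server.listen(1)
            conn, _addr = server.accept()                  # wait for the client to connect
            with conn:
                while True:
                    data = conn.recv(1024)                 # message from the client
                    if not data:
                        break
                    print("client:", data.decode())
                    conn.sendall(input("you: ").encode())  # reply typed on the server side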

    8. Bulk Event Certificate Generation script

    • Generate certificates in bulk using a PNG template and names from an Excel file. The script reads the participants' names from the Excel file and creates a certificate with each name on it. Extremely helpful for those planning to run large-scale events online.
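
    A rough sketch of how such a script can be put together with openpyxl and Pillow; the file names, font and text position below are assumptions, not the repository's actual values:

        import os
        import openpyxl
        from PIL import Image, ImageDraw, ImageFont

        TEMPLATE = "certificate_template.png"               # assumed template image
        FONT = ImageFont.truetype("arial.ttf", 64)          # assumed font file and size

        os.makedirs("certificates", exist_ok=True)
        sheet = openpyxl.load_workbook("participants.xlsx").active

        for row in sheet.iter_rows(min_row=2, values_only=True):  # skip the header row
            name = str(row[0])
            cert = Image.open(TEMPLATE).convert("RGB")
            draw = ImageDraw.Draw(cert)
            # Draw the participant's name at an assumed position, centered on that point.
            draw.text((600, 400), name, fill="black", font=FONT, anchor="mm")
            cert.save(os.path.join("certificates", f"{name}.png"))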

    Before and After running the script

    Visit original content creator repository https://github.com/afrozchakure/Python-Games
  • BERT-keras

    Status: Archive (code is provided as-is, no updates expected)

    BERT-keras

    Keras implementation of Google BERT (Bidirectional Encoder Representations from Transformers) and OpenAI’s Transformer LM, capable of loading pretrained models with a finetuning API.

    Update: TPU support for both inference and training, as in this colab notebook, thanks to @HighCWu

    How to use it?

    # this is a pseudo code you can read an actual working example in tutorial.ipynb or the colab notebook
    text_encoder = MyTextEncoder(**my_text_encoder_params) # you create a text encoder (sentence piece and openai's bpe are included)
    lm_generator = lm_generator(text_encoder, **lm_generator_params) # this is essentially your data reader (single sentence and double sentence reader with masking and is_next label are included)
    task_meta_datas = [lm_task, classification_task, pos_task] # these are your tasks (the lm_generator must generate the labels for these tasks too)
    encoder_model = create_transformer(**encoder_params) # or you could simply load_openai() or you could write your own encoder(BiLSTM for example)
    trained_model = train_model(encoder_model, task_meta_datas, lm_generator, **training_params) # it does both pretraing and finetuning
    trained_model.save_weights('my_awesome_model') # save it
    model = load_model('my_awesome_model', encoder_model) # load it later and use it!

    Notes

    • The general idea of this library is to use OpenAI’s/Google’s pretrained model for transfer learning
    • In order to see how the BERT model works, you can check this colab notebook
    • In order to be compatible with both BERT and OpenAI I had to assume a standard ordering for the vocabulary, I’m using OpenAI’s so in the loading function of BERT there is a part to change the ordering; but this is an implementation detail and you can ignore it!
    • Loading OpenAI model is tested with both tensorflow and theano as backend
    • Loading a Bert model is not possible on theano backend yet but the tf version is working and it has been tested
    • Training and fine-tuning a model is not possible with theano backend but works perfectly fine with tensorflow
    • You can use the data generator and task meta data for most of the NLP tasks and you can use them in other frameworks
    • There are some unit tests for both dataset and transformer model (read them if you are not sure about something)
    • Even though I don’t like my keras code, it’s readable 🙂
    • You can use other encoders, like LSTM or BiQRNN for training if you follow the model contract (have the same inputs and outputs as transformer encoder)
    • Why should I use this instead of the official release? First, this one is in Keras, and second, it has a nice abstraction over token-level and sentence-level NLP tasks which is framework independent
    • Why keras? pytorch version is already out! (BTW you can use this data generator for training and fine-tuning that model too)
    • I strongly advise you to read the tutorial.ipynb (I don’t like notebooks so this is a poorly designed notebook, but read it anyway)

    Important code concepts

    • Task: there are two general kinds of tasks, sentence-level tasks (like is_next and sentiment analysis) and token-level tasks (like PoS and NER)
    • Sentence: a sentence represents an example with its labels and everything; for each task it provides a target (a single one for sentence-level tasks, a per-token label for token-level tasks) and a mask. For token-level tasks we not only need to ignore paddings but may also want to predict the class only on the first piece of a multi-piece word (as in the BERT paper), and for sentence-level tasks we want an extraction point (like the start token in the BERT paper)
    • TaskWeightScheduler: for training we might want to start with language modeling and smoothly move to classification; this can be easily implemented with this class
    • attention_mask: with this you can 1. make your model causal 2. ignore paddings 3. do your crazy idea 😀
    • special_tokens: pad, start, end, delimiter, mask

    Ownership

    Neiron

    Visit original content creator repository
    https://github.com/Separius/BERT-keras

  • basquete

    Supervised Practical Activity – Programming Logic

    Statistical Data – Basketball Team

    Supervised by: Simone de Abreu and Igor Oliveira Borges

    Activity Description

    The idea of the activity is to write a program that produces a statistical report on the players of a basketball team over a season. This report is important for the coach to determine whether the team's development indicators are good compared to the other teams of the season.

    A basketball team needs 5 players on the court and may have up to 5 more players as reserves, for a total of 10 players per team.

    For each player on the team, your program must read the name and the height. Use an array of Strings to store the names and an array to store the heights.

    After the data for the 10 players has been entered, the program must display the following menu of options:

    ======== BASKETBALL TEAM ========
    1 – Player data
    2 – Average height
    3 – Standard deviation of heights
    4 – Tallest and shortest height
    5 – Median of heights
    6 – Exit
    
    Enter an option:
    

    Menu Items

    1. Display the name and height of every player on the team.
    2. Compute and display the average height of the basketball team.
    3. Only if the average has already been computed, compute the standard deviation,
      which is given by the formula: (Σ(heights²) ÷ number of heights) - average².
    4. Find the tallest and the shortest player on the team. Display the player's name and height.
    5. Compute the median of the heights. The median is the central element of a sorted list. If the data set has an even number of elements, the median is the average of the two central values. Search for "sort an array in JAVA".
      Remember that the array of names must also be rearranged; for that, look up the string copy function – clone(). (The sketch after this list illustrates the statistics in items 2, 3 and 5.)
    6. End the program's execution.
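
    The assignment itself must be implemented in Java, but as a quick illustration of the statistics in items 2, 3 and 5, here is a short Python sketch with sample heights:

        import math

        heights = [1.98, 2.01, 1.85, 1.92, 2.10, 1.88, 1.95, 2.05, 1.90, 2.00]  # sample data

        mean = sum(heights) / len(heights)                                # item 2: average height
        spread = sum(h * h for h in heights) / len(heights) - mean ** 2  # the expression from item 3
        std_dev = math.sqrt(spread)                                       # the standard deviation is its square root

        ordered = sorted(heights)                                         # item 5: median of the sorted heights
        n = len(ordered)
        median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2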

    Rules and Restrictions

    For the program to be developed correctly, the following rules and restrictions must be met:

    1. The program's start screen must print (System.out.println()) the full name and student ID (RA) of every member of the group!
    2. A player's height cannot be zero or negative. If an invalid value is entered, the program must request a new value.
    3. The program must not terminate because of any of the rules listed. It must validate the input and only proceed once the input data is valid.
    4. The program must only finish when item 6 of the menu is chosen.
    5. You may use the concept of methods – look it up in the reference books.
    6. To store the players' names, the group must research the
      concept of string arrays ("String array in Java" on Google).

    Deliverables

    The work must be developed in teams of at least 3 and at most 5 students.
    Each team must submit the project compressed in ZIP format via Blackboard.

    Evaluation Criteria

    Each group will have its work evaluated according to the following criteria:

    • Correct implementation and behavior of the algorithm.
    • Readability (comments and organization).
    • Appropriate naming of variables.
    • On-time submission on Blackboard.

    Visit original content creator repository
    https://github.com/rafifos/basquete