Blog

  • NodeWrecker

    NodeWrecker

Stress test your cluster under sporadic high CPU, memory, or disk load.

    Build image

    make image

For Raspberry Pi:
make image-pi

    Build binary

    make build

For Raspberry Pi:
make build-pi

    Run via docker

docker run jaeg/nodewrecker:latest --threads=4 --escalate=true --abuse-memory=true --chaos

Raspberry Pi:
docker run jaeg/nodewrecker:latest-pi --threads=4 --escalate=true --abuse-memory=true --chaos

    Install via helm

    helm upgrade --install node-wrecker ./helm-chart/

Raspberry Pi:

• Update appVersion in helm-chart/Chart.yaml from latest to latest-pi
    • helm upgrade --install pi-wrecker ./helm-chart/

    Flags

• chaos
  • default: false
  • Enables chaos mode
• threads
  • default: 4
  • Number of threads to run
• sleep
  • default: 1
  • Milliseconds to sleep
• escalate
  • default: false
  • Keep creating threads
• escalate-rate
  • default: 1000
  • Milliseconds between creating new threads
• string-length
  • default: 1000
  • Length of the randomly generated string
• abuse-memory
  • default: false
  • If true, NodeWrecker stores all generated values in memory
• min-duration
  • default: 10
  • Minimum seconds a test lasts
• max-duration
  • default: 60
  • Maximum seconds a test lasts
• max-delay
  • default: 10
  • Maximum seconds between tests
• min-delay
  • default: 10
  • Minimum seconds between tests
• verbose
  • default: false
  • Output everything from threads
• output
  • default: false
  • Write output from threads to .txt files
• output-dir
  • default: ./
  • Directory to put output from threads

    Visit original content creator repository
    https://github.com/jaeg/NodeWrecker

  • jetnet

    JetNet

    What models or features are you interested in seeing in JetNet? Let us know!

    JetNet is a collection of models, datasets, and tools that make it easy to explore neural networks on NVIDIA Jetson (and desktop too!). It can easily be used and extended with Python.

    Check out the documentation to learn more and get started!

    It’s easy to use

    JetNet comes with tools that allow you to easily build, profile and demo models. This helps you easily try out models to see what is right for your application.

    jetnet demo jetnet.trt_pose.RESNET18_HAND_224X224_TRT_FP16

    It’s implementation agnostic

    JetNet has well defined interfaces for tasks like classification, detection, pose estimation, and text detection. This means models have a familiar interface, regardless of which framework they are implemented in. As a user, this lets you easily use a variety of models without re-learning a new interface for each one.

    class PoseModel:
        
        def get_keypoints(self) -> Sequence[str]:
            raise NotImplementedError
    
        def get_skeleton(self) -> Sequence[Tuple[int, int]]:
            raise NotImplementedError
    
    def __call__(self, image: Image) -> PoseSet:
            raise NotImplementedError
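To make the interface concrete, here is a minimal, runnable Python sketch of a toy implementation. The Image and PoseSet stand-ins and the DummyHandModel class are invented for illustration – they are not JetNet's real types:

```python
from typing import List, Sequence, Tuple

# Stand-ins for JetNet's Image and PoseSet types (hypothetical).
Image = object
PoseSet = List[dict]


class PoseModel:
    """The task interface: subclasses supply keypoints, skeleton, inference."""

    def get_keypoints(self) -> Sequence[str]:
        raise NotImplementedError

    def get_skeleton(self) -> Sequence[Tuple[int, int]]:
        raise NotImplementedError

    def __call__(self, image: Image) -> PoseSet:
        raise NotImplementedError


class DummyHandModel(PoseModel):
    """Toy implementation that always reports one fixed pose."""

    def get_keypoints(self) -> Sequence[str]:
        return ["wrist", "thumb_tip"]

    def get_skeleton(self) -> Sequence[Tuple[int, int]]:
        return [(0, 1)]  # edge from wrist to thumb_tip

    def __call__(self, image: Image) -> PoseSet:
        # A real model would run inference on the image here.
        return [{"wrist": (0.5, 0.5), "thumb_tip": (0.6, 0.4)}]


model = DummyHandModel()
print(model.get_keypoints())
```

Because every pose model exposes the same three methods, calling code never needs to know which framework sits behind them.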

    It’s highly reproducible and configurable

JetNet uses well-defined configurations to explicitly describe all the steps needed to automatically reproduce a model. This includes steps like downloading weights, downloading calibration data and optimizing with TensorRT, which often aren’t captured in open-source model definitions. These configurations are defined with pydantic and are JSON serializable, so they can be easily validated, modified, exported, and re-used.

For example, the following model, which includes TensorRT optimization, can be re-created with a single line:

    from jetnet.yolox import YOLOX_NANO_TRT_FP16
    
    model = YOLOX_NANO_TRT_FP16.build()
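The round-trip idea – a JSON-serializable config that can fully re-create a model – can be sketched with the standard library alone. JetNet itself uses pydantic; the YOLOXConfig class below and its build() behavior are purely hypothetical stand-ins:

```python
import json
from dataclasses import asdict, dataclass


# Hypothetical stand-in for a JetNet-style model config. JetNet uses
# pydantic models, but the export/re-create round trip works the same way.
@dataclass
class YOLOXConfig:
    variant: str
    input_size: int
    precision: str

    def build(self) -> str:
        # A real build() would download weights and produce a TensorRT
        # engine; here we just return a description string.
        return f"{self.variant}@{self.input_size} ({self.precision})"


cfg = YOLOXConfig(variant="yolox_nano", input_size=416, precision="fp16")
blob = json.dumps(asdict(cfg))          # export the exact recipe as JSON
same = YOLOXConfig(**json.loads(blob))  # re-create it elsewhere
print(same.build())
```

The serialized blob carries everything needed to rebuild the model, which is what makes the configs shareable and reproducible.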

    It’s easy to set up

    JetNet comes with pre-built docker containers for Jetson and Desktop. In case these don’t work for you, manual setup instructions are provided. Check out the documentation for details.

    Get Started!

Head on over to the documentation to learn more and get started!

    Visit original content creator repository https://github.com/NVIDIA-AI-IOT/jetnet
  • linkerd-cert-manager-identity

    LinkerD cert-manager identity

This project is designed as a drop-in replacement for the default identity provider that issues certificates from a local
cert-manager installation.

Only a few parameter changes are needed to take the Issuer configuration into account.

    Pre-requirements

cert-manager must be installed, and an Issuer must be configured and working beforehand.

Note: the service account for this identity controller requires a Role/RoleBinding that allows it to
create new CertificateRequests and Events. See role.yaml for details.

Usage

    Self-signed

A basic example can be found in examples/manual, where we first create a ClusterIssuer and a
certificate authority, which will be used as the trust anchor.

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: selfsigned
    spec:
      selfSigned: {}
    ---
    # root CA - trust anchor
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: selfsigned
      namespace: linkerd
    spec:
      isCA: true
      commonName: my-selfsigned-ca
      secretName: root-secret
      privateKey:
        algorithm: ECDSA
        size: 256
      issuerRef:
        name: selfsigned
        kind: ClusterIssuer
        group: cert-manager.io

Now we can configure the Issuer that we’ll use for issuing identity certificates. The linkerd-identity-issuer
intermediate certificate is used for validation reasons, since the linkerd CLI checks that it is valid.

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: linkerd
      namespace: linkerd
    spec:
      ca:
        secretName: root-secret
    ---
    # issuer cert - intermediate
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: linkerd-identity-issuer
      namespace: linkerd
    spec:
      secretName: linkerd-identity-issuer
      duration: 48h
      renewBefore: 25h
      issuerRef:
        name: linkerd
        kind: Issuer
      commonName: identity.linkerd.cluster.local
      dnsNames:
        - identity.linkerd.cluster.local
      isCA: true
      privateKey:
        algorithm: ECDSA
      usages:
        - cert sign
        - crl sign
        - server auth
        - client auth

The trust anchor must be extracted from the secret (root-secret in this case) and stored in a configmap named
linkerd-identity-trust-roots under the ca-bundle.crt key, in LinkerD’s namespace. During installation you must then
indicate that you are using an external certificate authority. The last step is to patch the identity controller
Deployment to use the ghcr.io/alkemic/linkerd-cert-manager-identity
image and add the argument --issuer-name=linkerd to specify which Issuer should be used.
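As a sketch, the resulting configmap might look like the following (assuming cert-manager stored the CA under its default ca.crt key in root-secret – verify the key name in your secret before copying this):

```shell
# Extract the trust anchor and create the configmap:
kubectl -n linkerd get secret root-secret \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca-bundle.crt
kubectl -n linkerd create configmap linkerd-identity-trust-roots \
  --from-file=ca-bundle.crt
```

```yaml
# Equivalent declarative form of the configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-identity-trust-roots
  namespace: linkerd
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```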

    Vault

The trust anchor is a certificate authority stored in Vault’s PKI engine, and you need a copy of it. Below is a
basic snippet taken from Vault’s documentation.

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: vault-issuer
      namespace: sandbox
    spec:
      vault:
        path: pki_int/sign/example-dot-com
        server: https://vault.local
        caBundle: <base64 encoded CA Bundle PEM file>
        auth:
          ...

The rest is similar to the self-signed approach: create the linkerd-identity-issuer Certificate.

    ---
    # issuer cert - intermediate
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: linkerd-identity-issuer
      namespace: linkerd
    spec:
      secretName: linkerd-identity-issuer
      duration: 48h
      renewBefore: 25h
      issuerRef:
        name: vault-issuer
        kind: Issuer
      commonName: identity.linkerd.cluster.local
      dnsNames:
        - identity.linkerd.cluster.local
      isCA: true
      privateKey:
        algorithm: ECDSA
      usages:
        - cert sign
        - crl sign
        - server auth
        - client auth

Create the configmap linkerd-identity-trust-roots with the ca-bundle.crt key containing the certificate authority mentioned above.

    Arguments

    Old (working in original identity controller):

    • addr – address to serve on (default “:8080”)
    • admin-addr – address of HTTP admin server (default “:9990”)
    • controller-namespace – namespace in which Linkerd is installed
    • enable-pprof – Enable pprof endpoints on the admin server
    • identity-scheme – scheme used for the identity issuer secret format
    • identity-trust-domain – configures the name suffix used for identities
    • kubeconfig – path to kube config
    • log-level – log level, must be one of: panic, fatal, error, warn, info, debug (default “info”)
    • trace-collector – Enables OC Tracing with the specified endpoint as collector

    Ignored:

    • log-format – log format, must be one of: plain, json (default “plain”)
    • identity-clock-skew-allowance – the amount of time to allow for clock skew within a Linkerd cluster
    • identity-issuance-lifetime – the amount of time for which the Identity issuer should certify identity
    • version – print version and exit

    New options:

    • issuer-kind – issuer kind, can be Issuer or ClusterIssuer (default “Issuer”)
    • issuer-name – name of issuer (default “linkerd”)
    • preserve-csr-requests – Do not remove CertificateRequests after csr is created

    Versioning

Versions in this project indicate which mainline version of LinkerD they support, e.g. 2.14.1 is fully
compatible with LinkerD v2.14.x. The patch version is incremented independently for this project’s own releases.

Compatibility

So far it has been tested (and used in production) with LinkerD versions 2.13.x and 2.14.x, cert-manager 1.11.x
and 1.13.x, and Kubernetes 1.23 and 1.27.

    Note

This project is based on LinkerD’s identity controller and the idea behind
istio-csr.

    Visit original content creator repository
    https://github.com/Alkemic/linkerd-cert-manager-identity

  • MochaChip

    MochaChip

    Space Invaders

    AstroDodge

    About

    MochaChip is an emulator/interpreter for the CHIP-8 interpreted language written in Java 17.

    CHIP-8 was created in the 1970s to be run on hobbyist 8-bit microcomputers like the COSMAC VIP. It features a relatively small and easy-to-use instruction set. Original CHIP-8 programs are limited to 4096 bytes, with the first 512 bytes reserved for the interpreter – many extensions to CHIP-8 over the years would greatly extend this limit, as well as add new instructions to the interpreter.

Writing a CHIP-8 interpreter is often said to be a great way to get familiar with emulator development and low-level computer science concepts. That was my goal here. If you’re getting started with programming, consider writing your own by getting familiar with CHIP-8’s architecture (see resources below). It’s best not to view anyone else’s code until you’ve tried to get things working yourself.

    Releases and Usage

    Download the latest release, extract the archive, and run the MochaChip JAR file. Requires Java version 17.

    Changelog

    0.2.1-alpha

    • More accurate cycle timing (default 500 CPU cycles per second)
    • Added speed adjustment feature to GUI, providing a range of 1 to 1000 cycles per second.

    0.2.0-alpha

    • Various code refactorings
• Compatibility improvements – fixed buggy instruction implementations. Should now work with nearly all original CHIP-8 programs
    • Added a debugger menu to the GUI. Displays a map of the loaded CHIP-8 memory. Displays a realtime monitor of every register and the stack as values are changed. Shows a pre-fetched list of all instructions found in the CHIP-8 program. Currently running instructions are highlighted yellow. Step mode allows pausing the program and stepping through each instruction one cycle at a time. Breakpoints TBD
    • Implemented pause and stop emulation menu options

    0.1.0-alpha

    • Initial release

    Why Java?

    Mostly just because I like Java, though it may not be the best choice as I’ve come to learn – read more about that below. Regardless, Java offers a lot in terms of very easy to use GUI libraries like Swing. CHIP-8 is an incredibly simple interpreter to emulate, so there’s not a lot of overhead or performance concerns (also, Java’s reputation for poor performance is largely undeserved). Drawing CHIP-8’s output to the screen is done entirely in Swing by essentially painting rectangles for each pixel.

    Current features

• MochaChip can load any .ch8 file through the GUI. Any CHIP-8 program that makes use of an extension like SUPER-CHIP or XO-CHIP will not work (yet).

• The viewport can be resized by selecting a scale factor in the display menu. The original CHIP-8 is restricted to one foreground and one background color (monochrome black and white). You can select a different color theme from the display menu that may be easier on the eyes.

• The old COSMAC VIP supported a 16-key hexadecimal keypad. The modern convention for emulating this is to map the left side of a QWERTY keyboard to these hex digits, starting with 1 at the top left and ending with V at the bottom right.

    HEX keypad to QWERTY layout
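That conventional mapping can be written out as a small table. This is the common community layout, not necessarily MochaChip’s exact keybinds:

```python
# Conventional CHIP-8 hex keypad -> QWERTY mapping (left 4x4 block of the
# keyboard). Illustrative only; MochaChip's actual keybinds may differ.
KEYMAP = {
    "1": 0x1, "2": 0x2, "3": 0x3, "4": 0xC,
    "Q": 0x4, "W": 0x5, "E": 0x6, "R": 0xD,
    "A": 0x7, "S": 0x8, "D": 0x9, "F": 0xE,
    "Z": 0xA, "X": 0x0, "C": 0xB, "V": 0xF,
}
print(KEYMAP["V"])  # 15
```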

    Current bugs

    • MochaChip can run most(-ish) original CHIP-8 programs. I’ve found quite a few that do not run yet, likely due to how I’m implementing some instructions and working with some of Java’s quirks. You will find some of the more complex CHIP-8 programs will not run or produce unexpected behavior. Feel free to open an issue if you want to point anything out.

    • Pausing/resuming emulation does not yet behave as expected.

    • The settings menu is a work in progress.

    • Audio beep not yet implemented

    Planned things

    • Fullscreen support
    • CRT shaders??
    • Compatibility for CHIP-8 extensions like SUPER-CHIP and XO-CHIP
    • Configurable keybinds with a per-game adjustment
    • Suite of debug tools

    CHIP-8 Resources

    Want to know more about CHIP-8, or are you writing an emulator of your own? Here’s a ton of helpful links.

Personal tidbits, technical details, and advice for other Java emulator developers

As of writing, I’ve only been working on this project for a week or so. Thanks to this incredibly well-written guide by Tobias V. Langhoff, I’ve made a lot of progress in a short time. Langhoff’s guide offers no code snippets and challenges you to implement the logic yourself based on CHIP-8’s criteria. A big takeaway I got from this is that it literally does not matter which programming language you choose for a project like this. I’ve seen a lot of posts on Reddit, Stack Overflow and the like by beginners over the years who struggle with choosing a language for whatever project they want to work on. The more time you spend debating which tools to use, the less time you have to write code. This goes beyond the scope of writing a simple interpreter. I feel that modern machines have become fast enough that you don’t have to worry THAT much about performance and overhead. Pick what you’re familiar with and what you enjoy using. For me right now, it’s Java. If you’re writing a 3D renderer from scratch or targeting mobile devices or web browsers, there’s a little more room to debate which tools to use.

    That being said, let’s talk about Java and working with small numbers like bytes. This is long and wordy but my hope is that it helps someone out who may discover this repo looking for solutions when debugging their own CHIP-8 emulator.

Java is not C, and – oddly enough – it does not support unsigned data types at all. In other words, a Java primitive byte has a signed range of -128 to 127. This is a problem for this project, because CHIP-8 handles all its operations using unsigned byte values (0-255). So if I’m decoding a CHIP-8 instruction that adds a value of 2 to register V5, which holds a value of 255, we expect the result to be 1, because exceeding the unsigned byte limit of 255 wraps the result back to 0 and counts up from there. This is the expected behavior. In Java, that register (V5) could not hold 255 to begin with. Signed data uses two’s complement to represent negative values on modern systems. If we look at the value 255 in binary (11111111), the most significant bit (the leftmost bit) is 1, signaling that this is a negative number. To determine the magnitude, we flip all bits (convert 1s to 0s and vice versa), giving us 00000000, then add 1 to the result for 00000001, or decimal 1. We know this number is negative as indicated by our original most significant bit, so we interpret it as -1. In other words, 255 in CHIP-8 is -1 in Java.
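The reinterpretation described above can be checked in a few lines of Python, which makes the 255 ↔ -1 equivalence easy to see:

```python
# Reinterpret the same 8 bits as signed and unsigned values.
bits = 0b11111111                       # 255 when read as unsigned
signed = int.from_bytes(bytes([bits]), byteorder="big", signed=True)
unsigned = signed & 0xFF                # mask recovers the raw 8 bits
print(signed, unsigned)  # -1 255
```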

You can imagine how this creates problems when we try to emulate CHIP-8 instructions. Not only that, but any time math is performed on primitive bytes, Java will promote those values to 32-bit integers first. For someone like me who knew very little about binary and hexadecimal going into this, it became very confusing when I saw that my outputs were nothing like what I expected them to be.

    The solution to this problem is bitmasking. Any time I want to perform math on my registers, which I know always hold 8-bit unsigned values, I need to make absolutely sure that I know what Java is doing to these values and that I store an 8-bit unsigned value back into my register.

    For example, say you are emulating CHIP-8’s ADD instruction and you have a simple function like this:

    public void addByte(int x, int nn){
    
    }
    

where int ‘x’ is the index of the register you are adding byte value ‘nn’ to. For safety, we cannot simply say

    x += nn
    

and leave it at that. CHIP-8 doesn’t really care if this operation overflows the register (wraps it back to zero and counts up when the range is exceeded). In fact, we want this behavior. That wouldn’t happen here anyway, because we are using ints whose range far exceeds a byte’s. We’re using ints as parameters because, as I said, Java will convert a byte to an int behind the scenes anyway when we do math. We’re simply going to convert everything back to 8-bit unsigned values before storing them back into register ‘x’. We do it like this:

    public void addByte(int x, int nn){
        int vx = registers.variableRegisters[x] & 0xFF;
        int val = nn & 0xFF;
        int result = (vx + val) & 0xFF;
        registers.variableRegisters[x] = (byte)result;
    }
    

    Let me explain this line-by-line.

        int vx = registers.variableRegisters[x] & 0xFF;
    

Create a new int ‘vx’ to store the value currently held in register x. I keep my registers in a class separate from the CPU and access them like this. We do a bitwise AND with the hex value 0xFF, or 11111111 in binary. This is called bitmasking: extracting only the bits you want from a number. In our case, we don’t really know for sure what is hiding in register ‘x’. I know that my registers are a byte[] array, but I don’t necessarily know that Java hasn’t put negative numbers in there at some point. By ANDing with 0xFF, we tell Java to just give us back the 8 raw bits from the register, ignoring two’s complement and the most significant bit’s role as a sign bit.

    int val = nn & 0xFF;
    

We do the exact same thing to ‘nn’, which is a value given to us by the CHIP-8 program. Again, we don’t know what sort of value we’re getting, so for safety we restrict it to 8 unsigned bits by masking with 0xFF.

        int result = (vx + val) & 0xFF;
    

Add the two values, again restricting the result to one unsigned byte.

        registers.variableRegisters[x] = (byte)result;
    

Now that we know our result is in the range we want, we can cast it down to a byte and put it back into register ‘x’. Note that we don’t need to – and should not – use these bitmasked unsigned values when referring to memory addresses or register/array indices.

Believe me when I tell you it took me entirely too long to understand this important quirk of Java and why I was seeing strange and incorrect outputs from my code. Follow this bitmasking rule whenever you are specifically expecting 8-bit unsigned values from any operation. You can write a helper function to unsign a value if you want, but I find adding & 0xFF where needed works just as well.
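The same masking rule carries over to any language. Here is the addByte logic re-sketched in Python (purely illustrative, since Python ints are arbitrary precision), where & 0xFF plays the identical role:

```python
def add_byte(vx: int, nn: int) -> int:
    """8-bit unsigned add with wrap-around, as a CHIP-8 register expects."""
    # Mask both operands to raw 8-bit values, add, then mask the result
    # so the sum wraps around at 256 instead of growing past a byte.
    return ((vx & 0xFF) + (nn & 0xFF)) & 0xFF


print(add_byte(255, 2))  # 1
```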

    Visit original content creator repository https://github.com/VinceIP/MochaChip
  • LinkAce

LinkAce

    Your self-hosted tool for effortlessly archiving, organizing, and sharing your favorite web links.

    Follow LinkAce on Mastodon Follow LinkAce on X.com Latest Release License

     

    Contents

     

    About LinkAce

    Preview Screenshot

LinkAce is a powerful, self-hosted solution for managing your personal link archive. Save articles for later reading, bookmark useful tools, and preserve important web content long-term – all in one place. With a clean, user-friendly interface, you can easily categorize and retrieve your links, and even share collections with friends, family, or coworkers. LinkAce isn’t meant to replace your browser bookmarks, but rather to provide you with a robust, personalized database for curating and managing your online discoveries. Whether you’re a professional, a researcher, or simply an avid internet user, you’ll find this tool invaluable for organizing your web resources efficiently and effectively.

    👉 Try the Demo

    Feature Highlights

    • Organize bookmarks with the help of lists and tags.
    • Multi-user support with internal sharing of links, lists or tags.
    • OAuth and OIDC are supported for SSO login to LinkAce.
    • Automated link monitoring informs you when any links become unavailable or were moved.
    • Automated archiving of saved sites via the Internet Archive.
    • A full REST API offers access to all features of LinkAce from other apps and services.
• LinkAce is also available on Zapier and integrates with more than 2,500 applications.
    • An advanced search including different filters and ordering.
    • A bookmarklet to quickly save links from any browser.
    • Save links with automatic title and description generation.
• LinkAce ships with light and dark themes that can be toggled manually or switched automatically.
    • Links can be private or public, so friends or internet strangers may see your collection.
    • Both private and public lists of links are accessible via RSS feeds.
    • Import and export of bookmarks from and to HTML.
    • Support for complete database and application backups to any AWS S3-compatible storage.

    More screenshots of the app and further details about the features can be found on the LinkAce Website.

     

    ⚙️ Setup

    LinkAce provides multiple ways of installing it on your server. The complete documentation for all installation methods can be found in the wiki.

     

    LinkAce 2.0 was just released! This is a big upgrade to the application. Please read the upgrade guide if you are still using LinkAce 1 and want to use version 2.

     

    💡 Support for LinkAce

    I built LinkAce to solve my own problem, and I now offer my solution and code without charging any money. I spent a lot of my free time building this application, so I won’t offer any free personal support, customization or installation help. If you need help please visit the community discussions and post your issue there.

⭐ You can get personal and dedicated support by becoming a supporter on Open Collective, Patreon or GitHub.

    Our Supporters on Open Collective

    Documentation and Community

    Details about all features and advanced configuration can be found in the project documentation.
    Additionally, you may visit the community discussions to share your ideas, talk with other users or find help for specific problems.

     

    🚧 Contribution

    Translations Codacy Badge GitHub branch check runs

    Please consult the contribution guidelines to start working on LinkAce.

     

    Thanks go to these wonderful people for their contributions:

    List of contributors

     

    LinkAce is a project by Kevin Woblick and Contributors

    Visit original content creator repository https://github.com/Kovah/LinkAce
  • Ziwi

    Ziwi

Ziwi is a free and open-source image viewer based on Qt5 and OpenCV4, implemented in C++. It can be used to view 8-bit / 12-bit / 16-bit Bayer images, and also supports a variety of common image formats, such as PNG, JPG, TIFF, and SVG.

    Ziwi 是一个免费且开源的图像查看器,基于 Qt5 和 OpenCV4,采用 C++ 编程实现。可以用于查看 8 bits / 12 bits / 16 bits 的 Bayer 图像,同时也支持各种常见的图片格式,例如 PNG、JPG、TIFF、SVG等。

    Environment Requirements / 环境依赖

    • C++ 17 (filesystem)
    • OpenCV 4.7.0
    • Qt 5.15.9
    • GCC / Clang
• Linux (Arch Linux / Manjaro) / Windows 10

Environment Install / 环境安装

    • OpenCV 4.7.0

          # step 1
          wget https://github.com/opencv/opencv/archive/4.7.0.zip
      
          # step 2
          unzip 4.7.0.zip
      
          # step 3
          cd opencv-4.7.0
          mkdir build
          cd build
      
          # step 4
          cmake -DCMAKE_BUILD_TYPE=RELEASE \
              -DCMAKE_INSTALL_PREFIX=./install \
              -DBUILD_SHARED_LIBS=ON \
              -DCMAKE_INSTALL_LIBDIR=lib64 \
              -DOPENCV_FORCE_3RDPARTY_BUILD=ON \
              -DBUILD_DOCS=OFF \
              -DBUILD_EXAMPLES=OFF \
              -DWITH_IPP=OFF \
              -DBUILD_TESTS=OFF ..
      
          # step 5
          make -j 16
      
          # step 6 (optional)
          make install
    • Qt 5.15.9

      Ships with Manjaro (Arch Linux), so installation is omitted

    Screenshots / 截图

    Todo-List

• Add a feature for viewing ordinary objects
• Add parameter memory for raw images
• Update the title, including the status bar and the MainWindow title
    Visit original content creator repository https://github.com/lutianen/Ziwi
  • reversingBits

    Reversing Bits Cheatsheets

    Welcome to the Reversing Bits Cheatsheets repository! This collection provides comprehensive guides on various tools essential for assembly programming, reverse engineering, and binary analysis. Each cheatsheet offers installation instructions, usage examples, and advanced tips for different operating systems.

    Website: https://mohitmishra786.github.io/reversingBits/

    Tools Included

    Assembly & Basic Analysis

    • NASM: A popular assembler for the x86 and x86-64 architectures.
    • GAS: GNU Assembler, part of the GNU Binutils project, used for assembling AT&T syntax assembly.
    • objdump: A powerful tool for displaying information about object files.
    • Hexdump: Used to display or dump binary data in hexadecimal format.
    • strings: Extracts printable strings from files, useful for quick analysis.
    • file: Determines file type by examining its contents.
    • nm: Lists symbols from object files.
    • readelf: Displays information about ELF (Executable and Linkable Format) files.

    Debuggers & Dynamic Analysis

    • GDB: The GNU Debugger for debugging programs at the source or assembly level.
    • OllyDbg: A 32-bit assembler level debugger for Windows.
    • WinDbg: Microsoft’s debugger for Windows applications.
    • QEMU: Emulator and virtualizer for cross-platform analysis.
    • Valgrind: Tool suite for debugging and profiling Linux programs.
    • Unicorn: Lightweight, multi-platform CPU emulator framework.

    Disassemblers & Decompilers

    • IDA Pro: Industry-standard disassembler and debugger.
    • Ghidra: NSA’s software reverse engineering suite.
    • Binary Ninja: Modern reverse engineering platform.
    • Hopper: Reverse engineering tool for macOS and Linux.
    • RetDec: Retargetable machine-code decompiler.
    • Radare2: Complete framework for reverse-engineering.
    • Rizin: Fork of radare2 with enhanced features.

    Binary Analysis Frameworks

    • Angr: Python framework for binary analysis.
    • BAP: Binary Analysis Platform for reverse engineering.
    • Capstone: Lightweight multi-architecture disassembly framework.
    • Dyninst: Binary instrumentation and analysis library.
    • Frida: Dynamic instrumentation toolkit.
    • PIN: Intel’s dynamic binary instrumentation framework.
    • Binary Ninja Cloud: Cloud-based reverse engineering platform by Vector 35.
• Cutter: A free and open-source reverse engineering platform powered by Rizin, with a Qt-based GUI.
    • Binary Analysis Tool (BAT): A framework for automated binary code analysis, providing a unified interface for various binary analysis tools.
    • Miasm: A reverse engineering framework written in Python, focused on advanced binary analysis and code instrumentation.
    • Triton: A dynamic binary analysis framework based on PIN, providing a powerful constraint solver for symbolic execution.
    • PEDA: Python Exploit Development Assistance for GDB, enhancing the GDB debugger with additional functionality for reverse engineering.
    • .NET IL Viewer: A tool for analyzing .NET assemblies, allowing you to view the disassembled code and metadata.
    • Snowman: A decompiler for x86/x64 binaries, providing a graphical user interface and support for multiple file formats.

    Malware Analysis & Security

    • YARA: Pattern matching tool for malware analysis.
    • Zynamics: Binary difference analysis tools.
    • Intel XED: X86 encoder decoder library.
    • Spike: Network protocol fuzzer.
    • FrEEdom: Binary analysis framework.
    • Diaphora: Advanced binary diffing tool for IDA Pro.

    Star History

    Star History Chart

    How to Use

    • Installation: Follow the OS-specific instructions in each cheatsheet for tool installation.
    • Usage: Each file contains usage examples, common commands, and advanced tips.
    • Contributing: If you have improvements or additional tools to add, please fork the repository, make your changes, and submit a pull request.

    License

    This repository is licensed under the MIT License – see the LICENSE file for details.

    Acknowledgements

    • Thanks to the developers and communities behind these tools for their invaluable resources.
    • Contributions are always appreciated! Check the CONTRIBUTING.md for guidelines on how to contribute.
    Visit original content creator repository https://github.com/mohitmishra786/reversingBits
  • cssGuden

    🎉 cssGuden autoChic CSS: The Ultimate CSS Library for Any Web Application 🚀

    🤔 Design Process

    When designing autoChic CSS, we asked ourselves:

    👥 Who is our target audience?

    Web developers, designers, and anyone looking to create a visually appealing web application 📈

    🤕 What are their needs and pain points?

An easy-to-use CSS library that helps them create a modern and responsive design without having to start from scratch 🔄

    📊 What are the current trends and best practices in web design?

    Modular and flexible design, responsive design, and a focus on user experience 📈

    💡 How can we create a CSS library that meets these needs and exceeds expectations?

    By providing a comprehensive and modular CSS architecture, pre-designed components and layouts, and a focus on ease of use and customization 🔩

    📜 Design Principles

    • Create a visually appealing and modern design that enhances the user experience 🌈
    • Use a modular and flexible approach to make the library easy to customize and extend 🤝
    • Focus on simplicity and ease of use, while still providing advanced features and functionality 📈

    🎁 Features

    • Responsive design for various screen sizes and devices 📱
    • Modular CSS architecture for easy customization 🔧
    • Pre-designed components and layouts for common use cases 📈
    • Easy to use and integrate into your web projects 💻

    🚀 Getting Started

    1. Link the autoChic CSS file to your HTML document 📁
    2. Start building your web page using the provided components and layouts 🏗️
    3. Customize the styles to fit your needs using the modular CSS architecture 🔩

    👥 Contributing to autoChic CSS

    If you’d like to contribute to the library, please follow these guidelines:

    • Fork the repository and create a new branch for your changes 📈
    • Follow the design principles and coding standards outlined above 📜
    • Write clean, modular, and well-documented code 💻
    • Submit a pull request for review and feedback 📣

    📜 License and Attribution

    autoChic CSS is licensed under the MIT License 📜

    Attribution is not required but appreciated. If you use autoChic CSS in your project, please consider giving credit to the original authors 🙏

    📈 Version History

    v1.0: Initial release of autoChic CSS aka cssGuden 🎉

(Party Toad sticker images)

    Visit original content creator repository https://github.com/essingen123/cssGuden
  • DeePiCt

    DeePiCt

    Source code for the paper:

    Convolutional networks for supervised mining of molecular patterns within cellular context. Nature Methods, pp.1-11 (2023)

de Teresa, I.*, Goetz S.K.*, Mattausch, A., Stojanovska, F., Zimmerli C., Toro-Nahuelpan M., Cheng, D.W.C., Tollervey, F., Pape, C., Beck, M., Diz-Muñoz, A., Kreshuk, A., Mahamid, J. and Zaugg, J.

    Table of Contents

    1. Introduction
    2. Installation
    3. How to run
    4. Colab Notebooks
    5. Trained Models
6. Additional Scripts

    1. Introduction

    With improved instrumentation and sample preparation protocols, a large number of high-quality cryo-ET images are rapidly being generated in laboratories, opening the possibility to conduct high-throughput studies in cryo-ET. However, due to the crowded nature of the cellular environment together with image acquisition limitations, data mining and image analysis of cryo-ET tomograms remains one of the most important bottlenecks in the field. We present DeePiCt (Deep Picker in Context), a deep-learning based pipeline to achieve structure segmentation and particle localization in cryo-electron tomography. DeePiCt combines two dedicated convolutional networks: a 2D CNN for segmentation of cellular compartments (e.g. organelles or cytosol), and a 3D CNN for particle localization and structure segmentation.

Segmentation of fatty-acid synthases (FAS), ribosomes and membranes in a cryo-tomogram from S. pombe

Figure 1 | DeePiCt’s Workflow for Segmentation of cellular structures. a. Both the 2D CNN and the 3D CNN in the DeePiCt workflow are variations of the U-Net architecture (Ronneberger et al., 2015). b. An example of DeePiCt’s workflow for the segmentation of membranes and the localization of fatty-acid synthases (FAS) and cytosolic ribosomes in a S. pombe cryo-tomogram.

    2. Installation

    Both 2D and 3D CNN pipelines require a conda installation, and are run via the Snakemake workflow management system.

    Requirements and conda environment

Package Installation (Miniconda, PyTorch and Keras).

    Miniconda

Download the latest Miniconda3 release according to your OS and processor (modify the Miniconda3-latest-Linux-x86_64.sh file name according to the instructions available here):

    cd foldertodownload
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh

During installation, you will be asked where to install Miniconda. Select a folder with large capacity:

    /path/to/folder/with/large/capacity/miniconda3

    Virtual environment

    Create a basic conda environment with Snakemake and pandas

The virtual environment needs only snakemake and pandas:

    conda install -n base -c conda-forge mamba
    conda activate base
    mamba create -c conda-forge -c bioconda -n snakemake snakemake==5.13.0 python=3.7
    conda activate snakemake
    conda install pandas

Install PyTorch:

    conda install -c pytorch pytorch-gpu torchvision

    Install Keras:

    conda install -c conda-forge keras-gpu=2.3.1
    

    Clone this repository

    cd /folder/where/the/repository/will/be/cloned
    git clone https://github.com/ZauggGroup/DeePiCt.git

    3. How to run

Go to the folder where you plan to run the experiments. Create a configuration file with the structure given in the examples (see the 2d_cnn/config.yaml and 3d_cnn/config.yaml files). Run the pipeline with:

    • 2D CNN pipeline: bash /path/to/2d_cnn/deploy_cluster.sh /path/to/config.yaml

    • 3D CNN pipeline: bash /path/to/3d_cnn/deploy_cluster.sh /path/to/config.yaml

(To run locally, replace the deploy_cluster.sh script with deploy_local.sh.)

    Configuration

    We refer to the 2d_cnn/README.md and 3d_cnn/README.md files for corresponding specifications.

    4. Colab Notebooks

We provide two notebooks to try out prediction with the trained 2D and 3D CNN models on a single tomogram. The spectrum matching filter is not included in the notebooks; this step should be done beforehand, following the instructions here.

    To predict cytosol or organelles, you can use the 2D trained models:

    DeePiCt_predict2d.ipynb Open In Colab.

    To predict ribosome, membrane, microtubules or FAS, you can use the 3D trained models:

    DeePiCt_predict3d.ipynb Open In Colab.

    5. Trained Models

Trained models are available here. All models were trained with cryo-ET data (4-times binned, unbinned pixel size 3.37 Å) pre-processed using the spectrum matching filter with spectrum_TS_001.tsv.
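The effective voxel size of the training data follows directly from the stated binning. As a quick sanity check (plain arithmetic, not part of the DeePiCt code):

```python
# Voxel size of the 4x-binned training data, from the unbinned pixel size.
unbinned_px_angstrom = 3.37  # unbinned pixel size stated above, in Angstrom
binning_factor = 4
binned_px_angstrom = unbinned_px_angstrom * binning_factor
print(binned_px_angstrom)  # 13.48 Angstrom per voxel after 4x binning
```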

    6. Additional Scripts

A number of useful scripts can be found in the folder additional_scripts/. Run python additional_scripts/<script_name> --help to learn how to use each one.

Below is the list.

    • motl2sph_mask.py

    Script that converts coordinate lists into spherical masks, to produce training data for the 3D CNN. Example:

    python DeePiCt/additional_scripts/motl2sph_mask.py -r 8 -motl test_motl.csv -o \
    Downloads/test_mask.mrc -shape 900 900 500 -value 1
    
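The core idea of motl2sph_mask.py can be sketched in a few lines of NumPy. This is only an illustration of spherical-mask generation, not the script’s actual implementation (the function name, axis order, and toy volume are ours):

```python
import numpy as np

def spherical_mask(shape, centers, radius, value=1):
    """Paint a sphere of `radius` voxels around each (z, y, x) center
    into a zero-initialized volume of the given shape."""
    zz, yy, xx = np.indices(shape)
    mask = np.zeros(shape, dtype=np.uint8)
    for cz, cy, cx in centers:
        dist2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        mask[dist2 <= radius ** 2] = value
    return mask

# Toy volume with two particle centers (far smaller than a real tomogram):
mask = spherical_mask((32, 32, 32), [(10, 10, 10), (20, 20, 20)], radius=3)
print(mask.sum())  # total voxels covered by the two spheres
```

In the real script the coordinate list comes from a motl .csv file and the mask is written out as an .mrc volume.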
    • elliptical_distance_constraint.py

Script to merge several lists of coordinates into a single one, avoiding duplicates and imposing elliptical distance constraints that enforce (possibly different) minimal distances between points along the x, y, and z axes. The elliptic coefficients a, b, and c give the corresponding minimum distances in voxels. Example:

    python DeePiCt/additional_scripts/elliptical_distance_constraint.py --abc 9 9 15 -f test_motl1.csv test_motl2.csv \
    -o merged_list.csv 
    
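The elliptical rule itself is easy to state in code. The greedy sketch below illustrates it; the function name and toy coordinates are ours, not the script’s interface:

```python
def merge_with_elliptic_constraint(lists, a, b, c):
    """Greedily keep points from the given lists, dropping any point whose
    elliptical distance ((dx/a)^2 + (dy/b)^2 + (dz/c)^2) to an
    already-kept point is below 1."""
    kept = []
    for points in lists:
        for x, y, z in points:
            ok = True
            for kx, ky, kz in kept:
                d = ((x - kx) / a) ** 2 + ((y - ky) / b) ** 2 + ((z - kz) / c) ** 2
                if d < 1:
                    ok = False
                    break
            if ok:
                kept.append((x, y, z))
    return kept

list1 = [(0, 0, 0), (5, 0, 0)]    # (5,0,0) is within a=9 of the first point -> dropped
list2 = [(0, 0, 0), (0, 0, 20)]   # duplicate dropped; z-offset 20 > c=15 -> kept
merged = merge_with_elliptic_constraint([list1, list2], a=9, b=9, c=15)
print(merged)  # [(0, 0, 0), (0, 0, 20)]
```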
    Visit original content creator repository https://github.com/ZauggGroup/DeePiCt