Author Archives: Anil Kumar Pugalia


About Anil Kumar Pugalia

The author is a hobbyist in open source hardware and software, with a passion for mathematics, and a philosopher at heart. A gold medallist from the Indian Institute of Science, he counts Linux, mathematics and knowledge sharing among his passions. He experiments with Linux and embedded systems to share his learnings through his weekend workshops. Learn more about him and his experiments at https://sysplay.in.

Controlling Shell Commands through C

With the ever growing presence of embedded systems, it has become a very common requirement to do some tasks through shell commands while doing others through C in an embedded system. Very often, people achieve this using the system() C library function, which is known for its inefficiencies and limitations. So, is there a better way? Yes. The main C (commander) program may spawn a master shell script process, which then keeps accepting and executing shell commands from the commander program. Here are the two components (a main C program and a master script) of the framework:

/* File: commander.c */

#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
	int pfds[2]; // 0 is read end, 1 is write end
	int wfd;
	pid_t pid;
	char cmd[100];
	int cmdlen;
	int stop, status;

	if (pipe(pfds) == -1)
	{
		perror(argv[0]);
		return 1;
	}

	pid = fork();
	if (pid == -1)
	{
		perror(argv[0]);
		return 2;
	}
	else if (pid != 0) // Parent
	{
		close(pfds[0]); // Close the read end of the pipe
		wfd = pfds[1];
		// Continue doing other stuff, e.g.
		// take commands from user and pass onto the master script
		stop = 0;
		do
		{
			printf("Cmd (type \"done\" to exit): ");
			if ((status = scanf("%98[^\n]", cmd)) <= 0) // Bound input to fit cmd[]
			{
				getchar(); // Remove the \n
				continue;
			}
			getchar(); // Remove the \n
			if (strcmp(cmd, "done") == 0)
			{
				stop = 1;
			}
			else
			{
				cmdlen = strlen(cmd);
				cmd[cmdlen++] = '\n';
				//cmd[cmdlen] = '\0';
				// Pass on the command to master script
				write(wfd, cmd, cmdlen);
			}
		}
		while (!stop);
		close(wfd);
	}
	else
	{
		close(pfds[1]); // Close the write end of the pipe
		dup2(pfds[0], 0); // Make stdin the read end of the pipe
		execl("./master_script.sh", "master_script.sh", (char *)NULL);
		// execl() returns only on failure
		perror("Master script process spawn failed");
		return 3;
	}

	return 0;
}

#!/bin/bash
# File: master_script.sh

while read cmd
do
	#echo "Running ${cmd} ..."
	${cmd}
done

One may compile commander.c, make the master script executable, and try it as follows:

$ gcc commander.c -o commander
$ chmod +x master_script.sh
$ ./commander

This approach becomes even more powerful when shell commands need to be executed frequently. Also note that the main program need not block for the command execution to complete, though it can be customized to block as well, if required. And many more customizations can be achieved as desired, e.g. getting the command output back into the C program instead of stdout, redirecting the command error into some log file instead of stderr, getting the command status, to list a few. Post your comments below to discuss any customizations of your interest.


Types of Shell Commands

Every Linux user must have typed various commands on the shell, the so-called shell commands, knowingly or unknowingly – at least the ubiquitous command “ls”. However, has one ever wondered where these commands come from?

A command which one types on shell is broadly available from one of the following three places:

  • An executable from some standard path in the file system
  • A built-in of the shell, available from the shell’s binary itself
  • An alias or function created by the shell

“ls” – every Linux user’s typical first command – basically comes from the “ls” executable located under /bin. And so do many more commands. All such commands come from one of the directories like /bin, /usr/bin, /sbin, etc. The complete list of such directories can be obtained from the PATH environment variable by typing:

$ echo ${PATH}

Typing this as a normal user shows the list of directories searched for commands available to a normal user; typing it as the root user shows the list for the root user. Moreover, one could add directories to the corresponding PATH variable as well, using the “export” command.
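For instance (the directory name here is purely illustrative):

```shell
# Append a hypothetical tools directory to the command search path
export PATH="${PATH}:/opt/mytools/bin"
echo "${PATH}"
```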

Given a command available as an executable, e.g. “ls”, one may find its directory using the “which” command as follows:

$ which ls

What about the “which” command itself? Try:

$ which which

What about the “cd” command? Try:

$ which cd

And you’ll get a message saying no cd in the path. But then, cd still works, right? Just type:

$ type cd

And it would show that it is a shell built-in – the second of the three command types. And it makes sense for it to be a built-in, as the meaning of the current working directory (the concept behind “cd”) is relevant only with respect to a shell. What about the “type” command itself? Any guess? Try:

$ type type

Yes, as expected, this is also a shell built-in. What about the “which” command? Try:

$ type which

And it shows, as expected, that it is an executable being picked up from a corresponding hashed directory. What about the “ls” command? Try:

$ type ls

Ouch! This is not as expected. It doesn’t show “ls” as an executable from a corresponding directory, but rather that “ls” is aliased to some string like “ls -F --color=auto” or so. Why is it so? Because the “ls” command we type is actually an alias to the “ls” executable (with the specific colour options) – the third command type. And that’s why we see a coloured listing by default when we type “ls”. One can create aliases using the “alias” command, which itself is a shell built-in, as expected. And aliases override commands from the actual executables.

What if one wants to bypass the alias and directly call the executable? The command with the complete executable path, e.g. “/bin/ls”, may be invoked, or it may be backslashed, as follows:

$ \ls

Observe the non-coloured output of the original “ls” executable.

So to conclude, the “type” command gives the actual type (out of the three types) of any command we type on the shell. The “which” command figures out the directory of the corresponding executable, if any. And the “alias” command lists all the aliases currently defined in the corresponding shell.
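These checks can be scripted as well; in bash, for instance, type -t prints just a one-word category (“which” is assumed to be installed as an executable here):

```shell
# bash: "type -t" prints a one-word category per command kind
type -t cd     # prints: builtin
type -t type   # prints: builtin
type -t which  # prints: file (an executable found via PATH)
```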

On a side note, “man” command typically provides help on the shell’s executable commands only. For a built-in command, it may just open up the man page of the shell itself. So, for specific help on built-in commands, you may use the “help” command, e.g.:

$ help cd

Parting question: Which type of command is “help”? Try out the various commands on it and have fun exploring the types of shell commands.


Running DOS programs on Linux

Lots of people from the DOS days must have played and enjoyed those DOS games. Today, those do not always run straight away on Windows. So, the newer generation may have heard about them, but never got a first hand experience. Moreover, today there are more Linux users than ever. Yes, on Linux, people have been using wine to run Windows executables, but the graphics part is not always straightforward. So, for DOS game lovers, or for that matter for executing any DOS program to get that antique feeling, there is a simpler, elegant way: dosbox, which is available in various Linux distros. Just install it as a package, and run it with the command dosbox.

DOS would then so-called boot and give the DOS prompt, with Z:\ as the system drive. From there on, it is all DOS in that dosbox window. Now, how does one run external DOS executables? Assuming they are available in some folder under Linux, say ~/DOS, that folder can be mounted in the dosbox by the following command:

Z:\>mount c ~/DOS

With this, the ~/DOS folder from Linux is mounted as C:\ drive in dosbox. And now the various DOS things are applicable to it. One may switch to it by typing the drive, as follows:

Z:\>C:

If it has game executables, compiler executables, … from the DOS days, those can be run by just typing them with their complete path, as was done in those DOS days. Just remember that the directory separator in DOS is the backslash (\), like in Windows and unlike in Linux, and the forward slash (/) is used for command options.

To get a list and help on the default available (DOS) commands, type:

C:\>help /all

And finally to exit from the dosbox:

C:\>exit
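On a related note, to avoid typing the mount command on every start, it may be put in the [autoexec] section of the dosbox configuration file. A sketch follows; the exact file path varies with the dosbox version:

```ini
# In ~/.dosbox/dosbox-<version>.conf (illustrative path)
[autoexec]
mount c ~/DOS
c:
```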

 


Detection of I2C Devices

While exploring new I2C devices or bringing up I2C devices on Linux, especially when things are not working, a common doubt is whether the problem lies in the hardware or the software. In fact, this is a common doubt for any type of device, not just I2C. And the easiest way out for all such standard protocols is a user space tool / application which can scan the devices without depending on any device specific driver. The assumption here is that the corresponding bus driver is in place. Depending on the protocol, the tools may differ. So, here our focus is on I2C.

i2cdetect is a simple yet powerful tool for discovering I2C devices. If an I2C device is detectable with i2cdetect, the hardware is fine; if it is not detectable, there is some issue with the hardware, and the debugging can proceed accordingly. Executing i2cdetect may need root privileges, and it can be used as follows:

List the I2C buses available:

# i2cdetect -l

Say, buses 0 & 1 are available. Then, each bus can be scanned to see what device addresses respond on it. It is assumed that we know the addresses of our devices. Here’s how to scan, say, bus 0:

# i2cdetect -y 0

If this doesn’t work and issues an error, you may add the “-r” option to use SMBus commands, which should work:

# i2cdetect -y -r 0

Output of the working command will be an array of all device address locations on that bus, with “--”, “UU”, or a device address as each value.

“--” indicates the address was probed but no device responded. So, if you are expecting a device at some address and got “--”, it means either it is not on this bus, or the device is not getting detected because of some hardware issue: hardware lines not connected properly, a voltage supply issue, or something else.

“UU” indicates that probing of this address was skipped because the address is currently in use by a driver. This is a strong indication that the device is present, and it is highly likely that the driver is also in place.

A device address in hexadecimal indicates that the device has been detected.
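For illustration only (the addresses here are made up), a scan showing a driver-claimed device at 0x24 and a detected but unclaimed device at 0x48 would look like:

```text
# i2cdetect -y -r 0
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- UU -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- 48 -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
```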

In both the latter cases, the hardware side of the device & its connections are fine. If it is still not working as expected while showing “UU”, chances are high that the driver needs tuning / modification. Just to be doubly sure about that, you may verify it by swapping the device with another one, if possible.

And for the case showing the device address in hexadecimal, either a software driver is needed for it, or it may be accessed using some user space access mechanism.

Note: i2cdetect is part of the i2c-tools package. So, if it is not available on the corresponding Linux system, the i2c-tools package may need to be installed.


Running Commands on VirtualBox from Outside

In today’s world of multiple operating systems (OSes), some things get done better / easier in one OS, and some in another. Now, if all types of tasks are to be done on their corresponding OSes, there are various possibilities. Two of those are:

  • Have multiple systems with the various OSes
  • Have a system with one of the OSes (referred to as host OS) and VirtualBoxes on it for the other OSes (referred to as guest OSes)

From an ease of use perspective, the second one is preferable. In either case, if automation of all the tasks is required, the following are the most common steps:

  • Write a script on one of the OSes, preferably bash script on Linux to automate its local tasks
  • Invoke / Do the tasks on the other OSes in the same script, using ssh with the command to the corresponding OS

For the above steps to work, a minimal network connection is expected between the different OSes. In case of multiple systems, physical network connection is required. However, with VirtualBoxes, even the network could be virtual between the VirtualBoxes.

In case of VirtualBoxes, a networkless solution is also possible. One can actually run commands on a VirtualBox based guest OS using commands on the host OS. The key to that is the command VBoxManage, which gets installed along with VirtualBox on the host OS. To be specific, it is the guestcontrol option of VBoxManage. Invoke it as follows for the further list of options:

$ VBoxManage help guestcontrol

Assuming a VirtualBox running with the name “Ubuntu”, username “test”, password “testpwd”, the following are a few examples of what can be done:

$ VBoxManage guestcontrol Ubuntu --username test --password testpwd mkdir /home/test/xyz
$ VBoxManage guestcontrol Ubuntu --username test --password testpwd mv /home/test/xyz /home/test/abc
$ VBoxManage guestcontrol Ubuntu --username test --password testpwd rmdir /home/test/abc

There are a few more commands, apart from the above, which can be directly used. But what about the whole plethora of commands on the guest OS? To access all the commands on the guest OS, they may be invoked using their complete path as follows:

$ VBoxManage guestcontrol Ubuntu --username test --password testpwd run --exe /bin/ls

The above is an example of invoking “ls”, but this shows the content of the root (/) directory on the guest OS. What if parameters are to be passed to the command? Then, do as follows:

$ VBoxManage guestcontrol Ubuntu --username test --password testpwd run --exe /bin/ls -- ls /home/test

This would list the contents of /home/test on the guest OS. Note that after the “--”, argv[0], argv[1], … have to be passed. Hence, the first one is “ls” itself.

Want to avoid typing the long command again and again on a Linux host? Then, define a variable, say:

$ export UBUNTU="VBoxManage guestcontrol Ubuntu --username test --password testpwd run --exe"
$ ${UBUNTU} /bin/ls -- ls /home/test

In case of doing it through a script, the variable may be defined without the export.

Now, if the output from the command is not desired, one may use “start” instead of “run”, e.g.

$ VBoxManage guestcontrol Ubuntu --username test --password testpwd start --exe /bin/ls -- ls /home/test

In all the above examples, it has been assumed that the VirtualBox Ubuntu was already running. If it was not, it may also be started / booted, paused, resumed, and stopped / shut down from the command line, giving full control over automation. Here are the corresponding commands:

$ VBoxManage startvm Ubuntu --type gui # The usual graphical Start
$ VBoxManage startvm Ubuntu --type headless # The hidden background Start
$ VBoxManage controlvm Ubuntu pause # Pause the VM
$ VBoxManage controlvm Ubuntu resume # Resume the VM
$ VBoxManage controlvm Ubuntu poweroff # Poweroff the VM
$ VBoxManage controlvm Ubuntu reset # Reset / Hard Restart the VM in case of hang or so
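Putting the pieces together, here is a minimal automation sketch; the VM name, credentials, and the crude boot wait time are assumptions to be adapted:

```shell
#!/bin/bash
# Sketch: boot a VM headless, run one guest command, then power it off.
# Guarded so it is a no-op on machines without VirtualBox installed.
command -v VBoxManage >/dev/null 2>&1 || { echo "VBoxManage not found; sketch only"; exit 0; }

VM="Ubuntu"                               # assumed VM name
CRED="--username test --password testpwd" # assumed guest credentials

VBoxManage startvm "${VM}" --type headless
sleep 60  # crude wait for the guest to boot; adapt as needed
VBoxManage guestcontrol "${VM}" ${CRED} run --exe /bin/ls -- ls /home/test
VBoxManage controlvm "${VM}" poweroff
```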

Playing with ALSA loopback devices

Looping back is always an interesting thing to play with. It comes with its own set of applications, ranging from testing & debugging to replication & integration. It has been used in various fields, including hardware and software. At the hardware level, we often short the Rx (receive) & Tx (transmit) lines to do the loopback in devices like serial ports, network interfaces, etc. In software, we do it using pipes, files, etc. However, an even more interesting concept is that of virtual devices doing the loopback. We had talked about virtual video loopback devices in the previous article “Simultaneous Access to Single Camera“. Similarly, we can have virtual audio loopback devices.

snd-aloop is the kernel module for setting up virtual audio loopback devices.

$ sudo modprobe snd-aloop

creates two devices 0 & 1 under a new “Loopback” card for both playback & capturing, as shown below, respectively:

Playback Devices

Capture Devices

In the above images, card 2 is the loopback card. It may vary, depending on the next free available card number. Moreover, each of the two devices under it has 8 subdevices, which can be accessed using the format hw:c,d,s, where c stands for the card number, d for the device number, and s for the subdevice number, e.g. hw:2,0,0

Now, whatever audio is played into hw:2,0,s can be captured from hw:2,1,s, and vice versa, with s ranging from 0 to 7. For example, audio played into hw:2,0,4 can be captured from hw:2,1,4; audio played into hw:2,1,7 can be captured from hw:2,0,7. These are the loopbacks. A simple experiment demonstrates the same.

Start recording audio from hw:2,1,4:

$ arecord -D hw:2,1,4 -f S16_LE -c 2 -r 48000 recorded.wav

Note that providing the sample format, channel count, and frame rate while recording ensures that the playback picks up the same settings; this is because there is no real hardware underneath, it is just a virtual loopback connection.

And in parallel (from another shell) play an audio from audio.wav into hw:2,0,4:

$ aplay -D hw:2,0,4 audio.wav

And you’d find that recorded audio contains the played one – a loopback in action. You may play the recorded audio as follows:

$ aplay recorded.wav

This would play on your system’s default speaker.

Also, note that there may be a problem playing an arbitrary audio.wav file because of mismatched audio format support. In that case, just record a new wave file with your speech using the following command:

$ arecord -f S16_LE -c 2 -r 48000 audio.wav

This would record from your system’s default mic.

Interestingly, audio loopback can also be achieved in user space using alsaloop from the alsa-utils package. Here is a demo of the same. From the output of aplay -l, hw:1,0 is the analog out (speaker); note that hw:1,0 is the same as hw:1,0,0. Find the equivalent on your system. Now, let’s loop the virtual audio capture device hw:2,1,4 back to it:

$ alsaloop -C hw:2,1,4 -P hw:1,0

On another shell, do the earlier playing:

$ aplay -D hw:2,0,4 audio.wav

This time, you should be able to hear audio.wav directly through the system’s default speaker. Again a loopback in action; rather, two loopbacks in action: audio.wav -> hw:2,0,4 -> (loopback through the snd-aloop driver) -> hw:2,1,4 -> (loopback through the alsaloop app) -> hw:1,0 -> heard on the speaker.


Simultaneous Access to Single Camera

There are many devices which are meaningful to access only when accessed by one user at a time. Examples include a serial port, a camera, … To illustrate it further, think through what would happen if two applications (aka users) read from the same serial port simultaneously. Some data would go to one user and some to the other, making both streams meaningless. That is why, in such cases, it is recommended to use only one application at a time for that particular device. A similar scenario happens with camera capture. To avoid such undesired results, the corresponding device framework often marks the device busy while it is in use, thus ensuring that only one application uses it at a time.

In general, this mutually exclusive device usage is fine. But what if two applications want to access the same data simultaneously? That is a problem. Even if the device were allowed to be accessed simultaneously, that would not solve it, as the data would get split, unlike in storage devices like EEPROMs or hard disks. One way to solve this is by mirroring the data, so that the mutual exclusivity is also not hampered. For that, we need an intermediate application which actually reads from the device and then mirrors the read data into as many virtual devices as needed. With such an arrangement, many applications can get the same camera feed, say for different processing.

Here is the outline of how to achieve this for a v4l2 (video for linux 2) compatible camera:

+ Download the v4l2loopback driver from https://github.com/umlaeute/v4l2loopback
+ Compile it using the corresponding kernel version, where the camera is attached. On an x86 system, typically typing make should do.
+ Load the v4l2loopback driver (v4l2loopback.ko file) w/ appropriate options. A typical way:

$ sudo insmod v4l2loopback.ko devices=2

Assuming an existing /dev/video0 for the camera, this would create two loopback video device file entries, video1 & video2. Refer to https://github.com/umlaeute/v4l2loopback/README.md for more options. Whatever is fed into these device files comes out as their output.

+ Feeding a video test source (Ancient Doordarshan Screen) into the loopback video device files, using gst-launch application (just for testing):

$ gst-launch-1.0 videotestsrc ! tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2

+ Open cheese or any such application to view video test screen from the video1 & video2 device files

+ Time to mirror the video0 stream to video1 & video2. Use the gst-launch application, as follows:

$ gst-launch-1.0 v4l2src device=/dev/video0 ! tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2

+ Now, video1 & video2 are mirrors of video0. Go ahead and enjoy using video1 & video2. An example:

$ gst-launch-1.0 v4l2src device=/dev/video1 ! xvimagesink

Working with multiple Python environments

With the ample use of Python in applications all over, it is a common requirement that different applications need different combinations of, and even conflicting versions of, Python modules. Rather than having separate (real or virtual) machines with different installations for the different applications, this can be achieved simply using Python’s virtualenv module. Here’s a quick summary of how to do it on Linux.

Install the python-virtualenv package, either using the package installer, or using pip of the desired python version:

$ sudo pip3 install virtualenv

Create a directory with the desired virtual environment, with or without the system wide installed packages, and the desired python version, as follows:

$ virtualenv --system-site-packages -p python3 ./venv

Or, without them (isolation is the default in recent virtualenv versions, which have dropped the old --no-site-packages option):

$ virtualenv -p python3 ./venv

Here, venv (in the current directory) is the directory created with the desired virtual environment. Now, time to activate the virtual environment:

$ . ./venv/bin/activate

From now on, this shell’s prompt is prefixed by (venv), indicating the virtual environment it is using. Whatever is done locally on this shell is specific only to this virtual environment, being stored in the virtual environment’s directory. So, whatever pip installs (w/o sudo) are required for an application to run can be done here, independent of any external environment – even independent of the system wide installed packages, in case the virtual environment was created without them. All such installs are local to this environment, without affecting the external environment.

Now the desired application, needing this environment, may be run in this environment.

Once done with the virtual environment, it can be deactivated as follows:

(venv) $ deactivate

It can be activated & deactivated as & when desired. And why just one? One may have any number of different virtual environments created and activated in parallel, just using separate directories and separate shells – no need of separate machines.
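The parallel environments can be sketched as follows, using the stdlib venv module as an equivalent of virtualenv; the directory names are illustrative, and --without-pip just keeps the sketch light:

```shell
# Two independent environments, say one per application
python3 -m venv --without-pip /tmp/venv-app1
python3 -m venv --without-pip /tmp/venv-app2

# On shell 1:
. /tmp/venv-app1/bin/activate
echo "${VIRTUAL_ENV}"     # prints /tmp/venv-app1
deactivate

# On shell 2, in parallel:
. /tmp/venv-app2/bin/activate
echo "${VIRTUAL_ENV}"     # prints /tmp/venv-app2
deactivate
```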


Working with multiple Java versions

With the ever growing use of Java in multi-platform applications, it is a common problem that different applications need different versions of Java. That means we need to have multiple versions of Java installed on a system, and to keep switching between them.

But how? By adjusting the JAVA_HOME variable!!! This overall cumbersome looking task has been made quite trivial by the Java environment manager (jenv) from Gildas Cuisinier. Installation & usage instructions for various platforms are given on its home page, as well as in its github repo. However, here’s a quick summary for a Linux bash environment:

  • Download jenv in your desired location, say in the hidden .jenv directory under your home:
$ git clone https://github.com/jenv/jenv ~/.jenv
  • Add the jenv script to the execution path, basically by adding the following line in ~/.bash_profile or ~/.profile:
export PATH=${PATH}:~/.jenv/bin
  • Then, add jenv’s initialization, i.e. add the following line in ~/.bashrc:
eval "$(jenv init -)"
  • Now, log out and log back in for all the settings to take effect. After logging in, if the system has a pre-installed Java version,
$ jenv versions

should show one line with “* system (set by …”. For other Java version(s), we need to download and add.

  • In case the corresponding Java version is already downloaded & extracted, go to the next bullet. Otherwise, download the various Java versions you need (.tar.gz or .bin). Untar the .tar.gz, or execute the .bin (after adding execute permission to it). Each extracts into its own folder. Shown below is one example of each.

Java Downloads

NB There are two variants of Java. One maintained by Oracle (called JDK, Java SE, …) and the other by open source community (called OpenJDK). Here are the corresponding links:

  1. JDK (Java SE, RE etc) is from Sun / Oracle (w/ Oracle Binary Code License)
  2. OpenJDK is the open source version of the above (w/ GNU General Public License version 2)
  • Now, time to add the downloaded & extracted Java version(s) using jenv add <extracted_java_folder>, e.g.
$ jenv add jdk1.6.0_45/
$ jenv add java-se-7u75-ri/

NB One may give the complete or relative path of the extracted folders. After this, the

$ jenv versions

should show something like this:

Java Versions

  • Now, switch to the desired Java version from the list using any of the following:
$ jenv local <version>

for only this current directory.

$ jenv shell <version>

for only this particular shell.

$ jenv global <version>

for globally.

$ jenv versions

should show the “*” (current setting) shifted appropriately.
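As an illustrative session (the version names depend on what has been added, and the project path is hypothetical), note that jenv local records the choice in a .java-version file in that directory:

```shell
$ jenv global system        # default everywhere
$ cd ~/projects/legacy-app  # hypothetical project needing an older Java
$ jenv local 1.6            # records the choice in ./.java-version
$ java -version             # now resolved by jenv to the local choice
```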


Creating a micro SD Image

In today’s embedded world, it is very common to create bootable micro SDs, then to replicate them by raw dumping the micro SD (uSD) into an image file, say using the dd command, and then to create another bootable uSD by flashing the image file onto a uSD, again using dd.

But what if one wants to create the image file by hand, rather than raw dumping an existing uSD? Can it be done? Very much yes. Here follows an outline of the steps:

First, create the image file for the appropriate size, say 1GiB:

$ dd if=/dev/zero of=file.img bs=32k count=32k

NB 1GiB has been achieved using 32 Kibi blocks of 32 KiB each, because nowadays 32KiB is quite an optimal block size in Linux.

Now, partition the image, with say 3 partitions (50MiB, 300MiB, remaining), using the GPT partition table type:

$ parted file.img mklabel gpt # Create a GPT partition table
$ parted file.img mkpart primary fat32 1MiB 51MiB # Create a partition for FAT32 fs
$ parted file.img mkpart primary 51MiB 351MiB
$ parted file.img mkpart primary 351MiB 2097151s

NB 2097151 is the last sector number in our image, but the third partition may not be able to extend till there, and the last command above may prompt with an alternative last sector. Accept it with “y”.

NB One may use any other partitioning utility like fdisk, as well.

Use the following commands to see partition details (in MiB & Byte units):

$ parted file.img unit MiB print
$ parted file.img unit B print

Here’s the expected output:

Partition Details using parted

Now, time to create filesystems on our partitions. But they are not visible as block device files. Then, how does one do it? That’s where loop device comes for rescue. Type the following commands to get the corresponding block device files:

$ sudo losetup -o 1MiB --sizelimit 50MiB -f file.img
$ sudo losetup -o 51MiB --sizelimit 300MiB -f file.img
$ sudo losetup -o 351MiB -f --sizelimit 705674752 file.img

NB 705674752 (refer to the print output above) is provided in bytes to be exact for the last partition

The three partitions would typically become available as /dev/loop0, /dev/loop1, and /dev/loop2. You may verify using the following command:

$ sudo blockdev --report /dev/loop?

Here’s the expected output:

Loop (Block) Device File Details using blockdev

NB If the partition sizes are not multiples of 4096, their block size (BSZ) would reduce to 512.

Now, do all the usual partition operations. For example:

$ sudo mkfs.vfat /dev/loop0
$ sudo mkfs.ext4 /dev/loop1
$ sudo mkfs.ext4 /dev/loop2

Mount the required filesystems & copy the appropriate contents & unmount them. For example, for the first vfat filesystem:

$ sudo mount /dev/loop0 /mnt
$ sudo touch /mnt/om_arham.txt
$ sudo umount /mnt

Once done with content creation for all the filesystems, detach the loop devices:

$ sudo losetup -d /dev/loop2
$ sudo losetup -d /dev/loop1
$ sudo losetup -d /dev/loop0

All done – file.img is the desired uSD image. In fact, one may use the above learnings to access the contents of an already existing uSD image, without flashing it onto a uSD. Go ahead & try it out. And finally, why only a uSD image? You may create &/or decode any damn storage image, using just what has been learnt.
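On a related note, newer losetup versions can attach the image and scan its partitions in one go with the -P (--partscan) option. A sketch follows; it needs root, and the allocated loop device name may differ:

```shell
$ sudo losetup -P -f --show file.img   # prints the allocated device, e.g. /dev/loop0
$ ls /dev/loop0p*                      # the three partitions: loop0p1 loop0p2 loop0p3
$ sudo mount /dev/loop0p1 /mnt         # use them like any partition block device
$ sudo umount /mnt
$ sudo losetup -d /dev/loop0
```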
