Category Archives: Linux Device Drivers

LFY series on Linux device drivers

File Systems: The Semester Project

This eighteenth article, which is part of the series on Linux device drivers, kick-starts the semester project by introducing the concept of a file system and simulating one in user space.

<< Seventeenth Article

If you have not yet guessed the semester project topic, then you should have a careful look at the table of contents of the various books on Linux device drivers. You’ll find that no book lists a chapter on file systems – not because they are not used or not useful. In fact, close to 100 different file systems exist as of today, and possibly most of them are in active use. Then why are they not talked about in general? Are so many file systems even required? Which one of them is the best? Let’s understand these one by one.

In reality, there is no best file system, and in fact there may never be one. This is because a particular file system suits only a particular set of requirements. Thus, different requirements call for different “best” file systems, leading to so many active file systems and many more to come. So, basically, they are not generic enough to be discussed generically; rather, they are more of a research topic. And the reason for them being the toughest of all drivers is simple: very little generic written material is available – the best documentation being the code of the various file systems. Hmmm! Sounds perfect for a semester project, right?

In order to get going on such a research project, a lot of smaller steps have to be taken. The first one is understanding the concept of a file system itself, which is best done by simulating one in user space. Shweta took ownership of getting this first step done, as follows:

  • Use a regular file for a partition, that is for the actual data storage of the file system (Hardware space equivalent)
  • Design a basic file system structure (Kernel space equivalent) and implement the format/mkfs command for it, over the regular file
  • Provide an interface/shell to type commands and operate on the file system, similar to the usual bash shell. In general, this step is achieved by the shell along with the corresponding file system driver in kernel space. But here that translation is embodied/simulated into the user space interface, itself. (Next article shall discuss this in detail)

The default regular file chosen is .sfsf (simula-ting file system file) in the current directory. The leading . (dot) keeps it hidden. The basic file system design mainly contains two structures: the super block, containing the info about the file system, and the file entry structure, containing the info about each file in the file system. Here are Shweta’s defines and structures, as defined in sfs_ds.h:

#ifndef SFS_DS_H
#define SFS_DS_H

#define SIMULA_FS_TYPE 0x13090D15 /* Magic Number for our file system */
#define SIMULA_FS_BLOCK_SIZE 512 /* in bytes */
#define SIMULA_FS_ENTRY_SIZE 64 /* in bytes */
#define SIMULA_FS_DATA_BLOCK_CNT ((SIMULA_FS_ENTRY_SIZE - (16 + 3 * 4)) / 4)

#define SIMULA_DEFAULT_FILE ".sfsf"

typedef unsigned int uint4_t;

typedef struct sfs_super_block
{
	uint4_t type; /* Magic number to identify the file system */
	uint4_t block_size; /* Unit of allocation */
	uint4_t partition_size; /* in blocks */
	uint4_t entry_size; /* in bytes */ 
	uint4_t entry_table_size; /* in blocks */
	uint4_t entry_table_block_start; /* in blocks */
	uint4_t entry_count; /* Total entries in the file system */
	uint4_t data_block_start; /* in blocks */
	uint4_t reserved[SIMULA_FS_BLOCK_SIZE / 4 - 8];
} sfs_super_block_t; /* Making it of SIMULA_FS_BLOCK_SIZE */

typedef struct sfs_file_entry
{
	char name[16];
	uint4_t size; /* in bytes */
	uint4_t timestamp; /* Seconds since Epoch */
	uint4_t perms; /* Permissions for user */
	uint4_t blocks[SIMULA_FS_DATA_BLOCK_CNT];
} sfs_file_entry_t;

#endif

Note that Shweta has put some redundant fields into sfs_super_block_t rather than computing them every time. Practically, that is not space inefficient, as plenty of empty reserved space is anyway available in the super block, which is expected to occupy the complete first block of the partition. For example, entry_count is a redundant field, as it equals the entry table’s size (in bytes) divided by the entry size, both of which are already part of the super block structure. Similarly, data_block_start could be computed from the number of blocks used before it, but storing it is again preferred.

Also, note the hard-coded assumption of a block size of 512 bytes, and the arbitrarily chosen magic number for the simula file system. This magic number is used to verify that we are dealing with the right file system when operating on it.

Coming to the file entry, it contains the following for every file:

  • Name of up to 15 characters (1 byte reserved for the terminating '\0')
  • Size in bytes
  • Time stamp (creation or modification? Not yet decided by Shweta)
  • Permissions (just one set, for the user)
  • Array of block numbers for up to 9 data blocks. Why 9? To make the complete entry exactly SIMULA_FS_ENTRY_SIZE (64 bytes), as worked out below.
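
To see that the numbers indeed add up, here is a tiny user-space sanity check against sfs_ds.h. It is only an illustrative sketch, assuming a platform where unsigned int (and hence uint4_t) is 4 bytes:

#include <assert.h>
#include <stdio.h>

#include "sfs_ds.h"

int main(void)
{
	/* 16 (name) + 3 * 4 (size, timestamp, perms) + 9 * 4 (blocks) = 64 */
	assert(sizeof(sfs_file_entry_t) == SIMULA_FS_ENTRY_SIZE);
	/* 8 * 4 (fields) + (512 / 4 - 8) * 4 (reserved) = 512 */
	assert(sizeof(sfs_super_block_t) == SIMULA_FS_BLOCK_SIZE);
	printf("sfs_ds.h layout assumptions hold\n");
	return 0;
}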

With the file system design ready, the make file system (mkfs) application, more commonly known as the format application, is implemented next, as format_sfs.c:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#include "sfs_ds.h"

#define SFS_ENTRY_RATIO 0.10 /* 10% of all blocks */
#define SFS_ENTRY_TABLE_BLOCK_START 1

sfs_super_block_t sb =
{
	.type = SIMULA_FS_TYPE,
	.block_size = SIMULA_FS_BLOCK_SIZE,
	.entry_size = SIMULA_FS_ENTRY_SIZE,
	.entry_table_block_start = SFS_ENTRY_TABLE_BLOCK_START
};
sfs_file_entry_t fe; /* All 0's */

void write_super_block(int sfs_handle, sfs_super_block_t *sb)
{
	write(sfs_handle, sb, sizeof(sfs_super_block_t));
}
void clear_file_entries(int sfs_handle, sfs_super_block_t *sb)
{
	int i;

	for (i = 0; i < sb->entry_count; i++)
	{
		write(sfs_handle, &fe, sizeof(fe));
	}
}
void mark_data_blocks(int sfs_handle, sfs_super_block_t *sb)
{
	char c = 0;

	lseek(sfs_handle, sb->partition_size * sb->block_size - 1, SEEK_SET);
	write(sfs_handle, &c, 1); /* To make the file size to partition size */
}

int main(int argc, char *argv[])
{
	int sfs_handle;

	if (argc != 2)
	{
		fprintf(stderr, "Usage: %s <partition size in 512-byte blocks>\n",
			argv[0]);
		return 1;
	}
	sb.partition_size = atoi(argv[1]);
	sb.entry_table_size = sb.partition_size * SFS_ENTRY_RATIO;
	sb.entry_count = sb.entry_table_size * sb.block_size / sb.entry_size;
	sb.data_block_start = SFS_ENTRY_TABLE_BLOCK_START + sb.entry_table_size;

	sfs_handle = creat(SIMULA_DEFAULT_FILE,
		S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
	if (sfs_handle == -1)
	{
		perror("No permissions to format");
		return 2;
	}
	write_super_block(sfs_handle, &sb);
	clear_file_entries(sfs_handle, &sb);
	mark_data_blocks(sfs_handle, &sb);
	close(sfs_handle);
	return 0;
}

Note the heuristic of 10% of total space to be used for the file entries, defined by SFS_ENTRY_RATIO. Apart from that, this format application takes the partition size as input and accordingly creates the default file .sfsf, with:

  • Its first block as the super block, using write_super_block()
  • The next 10% of the total blocks as the zeroed-out file entry table, using clear_file_entries()
  • And the remaining blocks as the data blocks, using mark_data_blocks(). This basically writes a single zero byte at the very end, to actually extend the underlying file .sfsf to the partition size

Like any other format/mkfs application, format_sfs.c is compiled using gcc, and executed from the shell as follows:

$ gcc format_sfs.c -o format_sfs
$ ./format_sfs 1024 # Partition size in blocks of 512-bytes
$ ls -al # List the .sfsf created with a size of 512 KiBytes
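
To cross-check what the format has actually written, a quick read-back of the super block can be done from user space. The following is just a hedged sketch (say, check_sfs.c – a hypothetical helper, not part of Shweta’s listings), reusing the definitions from sfs_ds.h:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#include "sfs_ds.h"

int main(void)
{
	int fd;
	sfs_super_block_t sb;

	if ((fd = open(SIMULA_DEFAULT_FILE, O_RDONLY)) == -1)
	{
		perror("Unable to open " SIMULA_DEFAULT_FILE);
		return 1;
	}
	if (read(fd, &sb, sizeof(sb)) != sizeof(sb))
	{
		perror("Unable to read the super block");
		close(fd);
		return 2;
	}
	close(fd);
	if (sb.type != SIMULA_FS_TYPE)
	{
		fprintf(stderr, "Not a simula file system\n");
		return 3;
	}
	printf("Partition: %u blocks; entries: %u; data starts at block %u\n",
		sb.partition_size, sb.entry_count, sb.data_block_start);
	return 0;
}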

Figure 33 shows the .sfsf pictorially for the partition size of 1024 blocks of 512 bytes each.

Figure 33: Simula file system on 512 KiB of partition (.sfsf)

Summing up

With the above design of simula file system (sfs_ds.h), along with the implementation for its format command (format_sfs.c), Shweta has thus created the empty file system over the simulated partition .sfsf. Now, as listed earlier, the final step in simulating the user space file system is to create the interface/shell to type commands and operate on the empty file system just created on .sfsf. Let’s watch out for Shweta coding that as well, completing the first small step towards understanding their project.


Module Interactions

This seventeenth article, which is part of the series on Linux device drivers, demonstrates various interactions with a Linux module.

<< Sixteenth Article

As Shweta and Pugs are gearing up for their final semester project in Linux drivers, they are closing in on some final tidbits of technical romancing. This mainly includes the various ways of interacting with a Linux module (a dynamically loadable and unloadable driver), namely accessing its variables, calling its functions, and passing parameters to it.

Global variables and functions

One might think: what is the big deal in accessing a module’s variables and functions from outside it? Just make them global, declare them extern in a header, include the header, and access them. In the general application development paradigm, it is that simple – but in the kernel development environment, it is not so. Though the recommendation has always been to make everything static by default, there were and are cases where non-static globals may be needed. A simple example could be a driver spanning multiple files, with function(s) from one file needing to be called in another. Now, to avoid any collision across the kernel even in such cases, every module is embodied in its own namespace. And we know that two modules with the same name cannot be loaded at the same time. Thus, by default, zero collision is achieved. However, this also implies that, by default, nothing from a module can be made really global throughout the kernel, even if we want it to be. And exactly for such scenarios, the <linux/module.h> header defines the following macros:

EXPORT_SYMBOL(sym)
EXPORT_SYMBOL_GPL(sym)
EXPORT_SYMBOL_GPL_FUTURE(sym)

Each of these exports the symbol passed as its parameter, additionally placing it in the default, _gpl or _gpl_future section, respectively. Hence, only one of them has to be used for a particular symbol – and the symbol could be either a variable name or a function name. Here’s the complete code (our_glob_syms.c) to demonstrate the same:

#include <linux/module.h>
#include <linux/device.h>

static struct class *cool_cl;
static struct class *get_cool_cl(void)
{
	return cool_cl;
}
EXPORT_SYMBOL(cool_cl);
EXPORT_SYMBOL_GPL(get_cool_cl);

static int __init glob_sym_init(void)
{
	if (IS_ERR(cool_cl = class_create(THIS_MODULE, "cool")))
	/* Creates /sys/class/cool/ */
	{
		return PTR_ERR(cool_cl);
	}
	return 0;
}

static void __exit glob_sym_exit(void)
{
	/* Removes /sys/class/cool/ */
	class_destroy(cool_cl);
}

module_init(glob_sym_init);
module_exit(glob_sym_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anil Kumar Pugalia <email@sarika-pugs.com>");
MODULE_DESCRIPTION("Global Symbols exporting Driver");

Each exported symbol also has a corresponding structure placed into each of the kernel symbol table (__ksymtab), kernel string table (__kstrtab), and kernel CRC table (__kcrctab) sections, marking it as globally accessible. Figure 30 shows a filtered snippet of the /proc/kallsyms kernel window, before and after loading the module our_glob_syms.ko, which has been compiled using the driver’s usual makefile.

Figure 30: Our global symbols module

The following code shows the supporting header file (our_glob_syms.h), to be included by modules using the exported symbols cool_cl and get_cool_cl:

#ifndef OUR_GLOB_SYMS_H
#define OUR_GLOB_SYMS_H

#ifdef __KERNEL__
#include <linux/device.h>

extern struct class *cool_cl;
extern struct class *get_cool_cl(void);
#endif

#endif

Figure 30 also shows the file Module.symvers, generated by the compilation of the module our_glob_syms. It contains the details of all the symbols exported in its directory. Apart from including the above header file, modules using the exported symbols should possibly also have this Module.symvers file in their build directory.

Note that the <linux/device.h> header in the above examples is included for the various class-related declarations and definitions, which have already been covered under the character driver discussions.
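
As a quick illustration of consuming these exports, here is a minimal sketch of a second module (say, use_glob_syms.c – a hypothetical name, not from the article’s code) that uses both exported symbols. It assumes our_glob_syms.ko is already loaded, and that the Module.symvers generated above is visible to this module’s build:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/device.h>

#include "our_glob_syms.h"

static int __init use_syms_init(void)
{
	/* Both accesses resolve through the kernel symbol table at load time */
	printk(KERN_INFO "cool_cl: %p; via get_cool_cl(): %p\n",
		cool_cl, get_cool_cl());
	return 0;
}

static void __exit use_syms_exit(void)
{
}

module_init(use_syms_init);
module_exit(use_syms_exit);

/* A GPL-compatible license is needed, as get_cool_cl is exported with EXPORT_SYMBOL_GPL */
MODULE_LICENSE("GPL");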

Module Parameters

Being aware of passing command-line arguments to an application, it is a natural quest to ask if something similar can be done with a module. And the answer is yes. Parameters can be passed to a module while loading it, say using insmod. Interestingly enough, and in contrast with the command-line arguments to an application, these can even be modified later, through sysfs interactions.

The module parameters are set up using the following macro (defined in <linux/moduleparam.h>, included through <linux/module.h>):

module_param(name, type, perm)

where name is the parameter name, type is the type of the parameter, and perm is the permissions of the sysfs file corresponding to this parameter. Supported type values are: byte, short, ushort, int, uint, long, ulong, charp (character pointer), bool or invbool (inverted boolean). The following module code (module_param.c) demonstrates a module parameter:

#include <linux/module.h>
#include <linux/kernel.h>

static int cfg_value = 3;
module_param(cfg_value, int, 0764);

static int __init mod_par_init(void)
{
	printk(KERN_INFO "Loaded with %d\n", cfg_value);
	return 0;
}

static void __exit mod_par_exit(void)
{
	printk(KERN_INFO "Unloaded cfg value: %d\n", cfg_value);
}

module_init(mod_par_init);
module_exit(mod_par_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anil Kumar Pugalia <email@sarika-pugs.com>");
MODULE_DESCRIPTION("Module Parameter demonstration Driver");

Note that before the parameter setup, a variable of the same name and compatible type needs to be defined.

Subsequently, the following steps and experiments are shown in Figures 31 and 32:

  • Building the driver (module_param.ko file) using the driver’s usual makefile
  • Loading the driver using insmod (with and without parameters)
  • Various experiments through the corresponding /sys entries
  • And finally, unloading the driver using rmmod
Figure 31: Experiments with module parameter

Figure 32: Experiments with module parameter (as root)

Observe the following:

  • Initial value (3) of cfg_value becomes its default value when insmod is done without any parameters
  • Permission 0764 gives rwx to the user, rw- to the group, and r-- to the others on the file cfg_value under parameters of module_param under /sys/module/

Check for yourself:

  • Output of dmesg | tail on every insmod and rmmod, for the printk outputs
  • Try writing into the /sys/module/module_param/parameters/cfg_value file as a normal user
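
For reference, the experiments of Figures 31 and 32 boil down to commands along the following lines. This is only an indicative sketch (the sysfs path follows from the module being named module_param); the exact outputs will differ on your system:

$ sudo insmod module_param.ko cfg_value=10 # Load with cfg_value set to 10
$ dmesg | tail -1 # Shows "Loaded with 10"
$ cat /sys/module/module_param/parameters/cfg_value # Readable by everyone
$ echo 5 | sudo tee /sys/module/module_param/parameters/cfg_value # Writing needs root
$ sudo rmmod module_param # dmesg would then show "Unloaded cfg value: 5"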

Summing up

With this, the duo have a fairly good understanding of Linux drivers, and they are all set to start working on their final semester project. Any guesses on what their topic is all about? Hint: They have picked up one of the most daunting Linux driver topics. Let us see how they fare at it.

Eighteenth Article >>


Kernel Window: Peeping through /proc

This sixteenth article, which is part of the series on Linux device drivers, demonstrates the creation and usage of files under the /proc virtual file-system.

<< Fifteenth Article

Useful kernel windows

After many months, Shweta and Pugs got together for a peaceful technical romancing. All through, we have been using all kinds of kernel windows, especially through the /proc virtual file-system (using cat), to help us decode the various nitty-gritties of Linux device drivers. Here’s a non-exhaustive summary listing:

  • /proc/modules – Listing of all the dynamically loaded modules
  • /proc/devices – Listing of all the registered character and block major numbers
  • /proc/iomem – Listing of on-system physical RAM & bus device addresses
  • /proc/ioports – Listing of on-system I/O port addresses (specially for x86 systems)
  • /proc/interrupts – Listing of all the registered interrupt request numbers
  • /proc/softirqs – Listing of all the registered soft irqs
  • /proc/kallsyms – Listing of all the running kernel symbols, including from loaded modules
  • /proc/partitions – Listing of currently connected block devices & their partitions
  • /proc/filesystems – Listing of currently active file-system drivers
  • /proc/swaps – Listing of currently active swaps
  • /proc/cpuinfo – Information about the CPU(s) on the system
  • /proc/meminfo – Information about the memory on the system, viz. RAM, swap, …

Custom kernel windows

“Yes, these have been really helpful in understanding and debugging the Linux device drivers. But is it possible for us to also provide some help? Yes, I mean can we create one such kernel window through /proc?”, questioned Shweta.

“Why one? As many as you want. And that’s simple – just use the right set of APIs and there you go.”

“For you everything is simple.”

“No yaar, this is seriously simple”, smiled Pugs. “Just watch out, me creating one for you.”

And in jiffies, Pugs created the proc_window.c file below:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/jiffies.h>
#include <linux/uaccess.h> /* For copy_from_user() */

static struct proc_dir_entry *parent, *file, *link;
static int state = 0;

int time_read(char *page, char **start, off_t off, int count, int *eof, void *data)
{
	int len, val;
	unsigned long act_jiffies;

	len = sprintf(page, "state = %d\n", state);
	act_jiffies = jiffies - INITIAL_JIFFIES;
	val = jiffies_to_msecs(act_jiffies);
	switch (state)
	{   
		case 0:
			len += sprintf(page + len, "time = %ld jiffies\n",
				act_jiffies);
			break;
		case 1:
			len += sprintf(page + len, "time = %d msecs\n", val);
			break;
		case 2:
			len += sprintf(page + len, "time = %ds %dms\n",
					val / 1000, val % 1000);
			break;
		case 3:
			val /= 1000;
			len += sprintf(page + len, "time = %02d:%02d:%02d\n",
					val / 3600, (val / 60) % 60, val % 60);
			break;
		default:
			len += sprintf(page + len, "<not implemented>\n");
			break;
	}
	len += sprintf(page + len, "{offset = %ld; count = %d;}\n", off, count);

	return len;
}
int time_write(struct file *file, const char __user *buffer, unsigned long count,
	void *data)
{
	char kbuf[2];

	if ((count < 1) || (count > 2))
		return count;
	/* buffer is a user-space pointer, so it must not be dereferenced directly */
	if (copy_from_user(kbuf, buffer, count))
		return -EFAULT;
	if ((count == 2) && (kbuf[1] != '\n'))
		return count;
	if ((kbuf[0] < '0') || ('9' < kbuf[0]))
		return count;
	state = kbuf[0] - '0';
	return count;
}

static int __init proc_win_init(void)
{
	if ((parent = proc_mkdir("anil", NULL)) == NULL)
	{
		return -1;
	}
	if ((file = create_proc_entry("rel_time", 0666, parent)) == NULL)
	{
		remove_proc_entry("anil", NULL);
		return -1;
	}
	file->read_proc = time_read;
	file->write_proc = time_write;
	if ((link = proc_symlink("rel_time_l", parent, "rel_time")) == NULL)
	{
		remove_proc_entry("rel_time", parent);
		remove_proc_entry("anil", NULL);
		return -1;
	}
	link->uid = 0;
	link->gid = 100;
	return 0;
}

static void __exit proc_win_exit(void)
{
	remove_proc_entry("rel_time_l", parent);
	remove_proc_entry("rel_time", parent);
	remove_proc_entry("anil", NULL);
}

module_init(proc_win_init);
module_exit(proc_win_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anil Kumar Pugalia <email@sarika-pugs.com>");
MODULE_DESCRIPTION("Kernel window /proc Demonstration Driver");

And then Pugs did the following steps:

  • Built the driver (proc_window.ko file) using the usual driver’s Makefile.
  • Loaded the driver using insmod.
  • Showed various experiments using the newly created proc windows. (Refer to Figure 28.)
  • And finally, unloaded the driver using rmmod.
Figure 28: Peeping through /proc

Demystifying the details

Starting from the constructor proc_win_init(), three proc entries are created:

  • Directory anil under /proc (i.e. NULL parent) with default permissions 0755, using proc_mkdir()
  • Regular file rel_time under the above directory anil with permissions 0666, using create_proc_entry()
  • Soft link rel_time_l to the above file rel_time under the same directory anil, using proc_symlink()

The corresponding removal of these three entries is achieved using remove_proc_entry() in the destructor proc_win_exit(), in the reverse order of their creation.

For every entry created under /proc, a corresponding struct proc_dir_entry is created. After which many of its fields could be further updated as needed:

  • mode – Permissions of the file
  • uid – User ID of the file
  • gid – Group ID of the file

Additionally, for a regular file, the following two function pointers for read and write over the file could be provided, respectively:

  • int (*read_proc)(char *page, char **start, off_t off, int count, int *eof, void *data)
  • int (*write_proc)(struct file *file, const char __user *buffer, unsigned long count, void *data)

write_proc() is very similar to the character driver’s file operation write(). The above implementation lets the user write a digit from 0 to 9, and accordingly sets the internal state. read_proc() in the above implementation provides the current state and the time since the system booted up, in different units based on the current state: jiffies in state 0; milliseconds in state 1; seconds & milliseconds in state 2; hours : minutes : seconds in state 3; and “<not implemented>” in the other states. To check the computation accuracy, Figure 29 highlights the system up time in the output of top. read_proc’s page parameter is a page-sized buffer, typically to be filled up with count bytes from offset off. But more often than not (because of small content), just the start of page is filled up, ignoring all other parameters.
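
In terms of concrete usage, the interaction with this window from the shell would look something like the following sketch (the exact time values will obviously differ on your system; no root privileges are needed, given the 0666 permissions):

$ cat /proc/anil/rel_time # State 0: time shown in jiffies
$ echo 3 > /proc/anil/rel_time # Switch to state 3 (hh:mm:ss)
$ cat /proc/anil/rel_time_l # Same content, read through the symlink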

Figure 29: Comparison with top’s output

All the /proc related structure definitions and function declarations are available through <linux/proc_fs.h>. And the jiffies related function declarations and macro definitions are in <linux/jiffies.h>. On a special note, the actual jiffies are being calculated by subtracting INITIAL_JIFFIES, as on boot-up, jiffies is initialized to INITIAL_JIFFIES instead of zero.

Summing up

“Hey Pugs!!! Why did you create the folder named as anil? Who is this Anil? You could have used my name or maybe yours.” suggested Shweta. “Ha!! That’s a surprise. My real name is Anil only. It is just that everyone in college knows me only as Pugs”, smiled Pugs. Watch out for further technical romancing of Pugs aka Anil.

Seventeenth Article >>

Notes

  1. One of the powerful usages of creating our own proc window is customized debugging by querying. A simple but cool modification of the above driver could be to read/write hardware registers, instead of fetching the jiffies.

Disk on RAM: Playing destructively

This fifteenth article, which is part of the series on Linux device drivers, experiments with a dummy hard disk on RAM to demonstrate the block drivers.

<< Fourteenth Article

Play first, rules later

After a delicious lunch, theory makes the audience sleepy. So, let’s start with the code itself. The code demonstrated is available at dor_code.tar.bz2. This tarball contains 3 ‘C’ source files, 2 ‘C’ headers, and a Makefile. As usual, executing make will build the ‘disk on RAM’ driver (dor.ko) – this time combining the 3 ‘C’ files. Check out the Makefile to see how. make clean would do the usual cleanup of the built stuff.

Once built, the following are the experimenting steps (Refer to Figures 25, 26, 27):

  • Load the driver dor.ko using insmod. This would create the block device files representing the disk on 512 KibiBytes (KiB) of RAM, with 3 primary and 3 logical partitions.
  • Check out the automatically created block device files (/dev/rb*). /dev/rb is the entire disk of 512 KiB size. rb1, rb2, rb3 are the primary partitions, with rb2 being the extended partition and containing the 3 logical partitions rb5, rb6, rb7.
  • Read the entire disk (/dev/rb) using the disk dump utility dd.
  • Zero out the first sector of the disk’s first partition (/dev/rb1) again using dd.
  • Write some text into the disk’s first partition (/dev/rb1) using cat.
  • Display the initial contents of the first partition (/dev/rb1) using the xxd utility. See Figure 26 for the xxd output.
  • Display the partition info for the disk using fdisk. See Figure 27 for the fdisk output.
  • (Quick) Format the third primary partition (/dev/rb3) as vfat filesystem (like your pen drive), using mkfs.vfat (Figure 27).
  • Mount the newly formatted partition using mount, say at /mnt (Figure 27).
  • Disk usage utility df would now show this partition mounted at /mnt (Figure 27). You may go ahead and store your files there. But, please remember that these partitions are all on a disk on RAM, and so non-persistent. Hence,
  • Unloading the driver using ‘rmmod dor’ would make everything vanish. Though, the partition needs to be unmounted using ‘umount /mnt’ before doing that.

Please note that all the above experimenting steps need to be executed with root privileges.

Figure 25: Playing with ‘Disk on RAM’ driver

Figure 26: xxd showing the initial data on the first partition (/dev/rb1)

Figure 27: Formatting the third partition (/dev/rb3)

Now, let’s learn the rules

We have just now played around with the disk on RAM, but without actually knowing the rules, i.e. the internal details of the game. So, let’s dig into the nitty-gritties to decode the rules. Each of the three .c files represents a specific part of the driver. ram_device.c and ram_device.h abstract the underlying RAM operations like vmalloc/vfree, memcpy, etc, providing disk operation APIs like init/cleanup, read/write, etc. partition.c and partition.h provide the functionality to emulate the various partition tables on the disk on RAM. Recall the pre-lunch session (i.e. the previous article) to understand the details of partitioning. The code in these is responsible for the partition information like number, type, size, etc that shows up for the disk on RAM under fdisk. ram_block.c is the core block driver implementation, exposing the disk on RAM as the block device files (/dev/rb*) to user-space. In other words, the four files ram_device.* and partition.* form the horizontal layer of the device driver, and ram_block.c forms the vertical (block) layer of the device driver. So, let’s understand that in detail.

The block driver basics

Conceptually, the block drivers are very similar to character drivers, especially with regards to the following:

  • Usage of device files
  • Major and minor numbers
  • Device file operations
  • Concept of device registration

So, if one already knows how character drivers are implemented, understanding block drivers would come easily. Though, they are definitely not identical. The key differences could be listed out as follows:

  • Abstraction for block-oriented versus byte-oriented devices
  • Block drivers are designed to be used by the I/O schedulers, for optimal performance. Compare that with character drivers, which are used directly by the VFS.
  • Block drivers are designed to be integrated with the Linux buffer cache mechanism for efficient data access. Character drivers are pass-through drivers, accessing the hardware directly.

And these trigger the implementation differences. Let’s analyze the key code snippets from ram_block.c, starting at the driver’s constructor rb_init().

The first step is to register for an 8-bit (block) major number. And registering for that implicitly means registering for all the 256 8-bit minor numbers associated with it. The function for that is:

int register_blkdev(unsigned int major, const char *name);

major is the major number to be registered. name is a registration label displayed under the kernel window /proc/devices. Interestingly, register_blkdev() tries to allocate & register a freely available major number, when 0 is passed for its first parameter major; and on success, the allocated major number is returned. The corresponding deregistration function is:

void unregister_blkdev(unsigned int major, const char *name);

Both are prototyped in <linux/fs.h>.
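
Purely as an illustration of the dynamic major allocation mentioned above (and not the dor driver itself), a minimal sketch of registering and unregistering a block major could look like this:

#include <linux/module.h>
#include <linux/fs.h>

static int major;

static int __init blk_major_init(void)
{
	/* Passing 0 asks the kernel to allocate a free block major for us */
	major = register_blkdev(0, "rb");
	if (major <= 0)
	{
		printk(KERN_ERR "Unable to get a block major number\n");
		return -EBUSY;
	}
	printk(KERN_INFO "Got block major %d; check /proc/devices\n", major);
	return 0;
}

static void __exit blk_major_exit(void)
{
	unregister_blkdev(major, "rb");
}

module_init(blk_major_init);
module_exit(blk_major_exit);

MODULE_LICENSE("GPL");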

The second step is to provide the device file operations, through the struct block_device_operations (prototyped in <linux/blkdev.h>), for the device files of the registered major number. However, these operations are too few compared to the character device file operations, and mostly insignificant. To elaborate, there are no operations even to read and write. That’s surprising. But as we already know that block drivers need to integrate with the I/O schedulers, the read/write implementation is achieved through something called request queues. So, along with providing the device file operations, the following also need to be provided:

  • Request queue for queuing the read/write requests
  • Spin lock associated with the request queue for its concurrent access protection
  • Request function to process the requests queued in the request queue

Also, there is no separate interface for block device file creations, so the following are also provided:

  • Device file name prefix, commonly referred to as the disk_name (“rb” in the dor driver)
  • Starting minor number for the device files, commonly referred to as the first_minor

Finally, two block device-specific things are also provided along with the above, namely:

  • Maximum number of partitions supported for this block device, by specifying the total minors
  • Underlying device size in units of 512-byte sectors, for the logical block access abstraction

All these are registered through the struct gendisk using the function:

void add_disk(struct gendisk *disk);

The corresponding delete function is:

void del_gendisk(struct gendisk *disk);

Prior to add_disk(), the various fields of struct gendisk need to be initialized, either directly or using various macros/functions like set_capacity(). major, first_minor, fops, queue, disk_name are the minimal fields to be initialized directly. And even before the initialization of these fields, the struct gendisk needs to be allocated using the function:

struct gendisk *alloc_disk(int minors);

where minors is the total number of partitions supported for this disk. And the corresponding inverse function would be:

void put_disk(struct gendisk *disk);

All these are prototyped in <linux/genhd.h>.
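
Putting the above pieces together, the gendisk setup inside the constructor would look roughly like the following sketch. The names rb_major, rb_fops, rb_queue and RB_SECTOR_CNT are assumed to be defined elsewhere in the driver (rb_queue being the request queue, set up as described in the next section), much as in ram_block.c, which remains the reference:

static struct gendisk *rb_disk;

/* Inside the constructor, after register_blkdev() and the request queue setup */
rb_disk = alloc_disk(8); /* Up to 8 minors: the whole disk plus its partitions */
if (!rb_disk)
	return -ENOMEM;
rb_disk->major = rb_major;
rb_disk->first_minor = 0;
rb_disk->fops = &rb_fops;
rb_disk->queue = rb_queue;
sprintf(rb_disk->disk_name, "rb");
set_capacity(rb_disk, RB_SECTOR_CNT); /* Disk size in 512-byte sectors */
add_disk(rb_disk);

/*
 * And in the destructor, the reverse: del_gendisk(rb_disk); put_disk(rb_disk);
 * followed by the queue cleanup and unregister_blkdev(rb_major, "rb").
 */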

Request queue and its request processing function

The request queue also needs to be initialized and set up into the struct gendisk, before the add_disk(). The request queue is initialized by calling:

struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *);

providing the request processing function and the initialized concurrency protection spinlock, as its parameters. The corresponding queue cleanup function is:

void blk_cleanup_queue(struct request_queue *);

The request (processing) function should be defined with the following prototype:

void request_fn(struct request_queue *q);

And it should be coded to fetch a request from its parameter q, say using

struct request *blk_fetch_request(struct request_queue *q);

and then either process it or initiate its processing. Whatever it does, it should be non-blocking, as this request function is called from a non-process context, and also after taking the queue’s spinlock. Moreover, only functions which do not release or take the queue’s spinlock should be used within the request function.

A typical request processing as demonstrated by the function rb_request() in ram_block.c is:

while ((req = blk_fetch_request(q)) != NULL) /* Fetching a request */
{
	/* Processing the request: the actual data transfer */
	ret = rb_transfer(req); /* Our custom function */
	/* Informing that the request has been processed with return of ret */
	__blk_end_request_all(req, ret);
}

Request and its processing

rb_transfer() is our key function, which parses a struct request and accordingly does the actual data transfer. The struct request mainly contains the direction of the data transfer, the starting sector for the data transfer, the total number of sectors for the data transfer, and the scatter-gather buffer for the data transfer. The various macros to extract this information from the struct request are as follows:

rq_data_dir(req); /* Operation: 0 - read from device; otherwise - write to device */
blk_rq_pos(req); /* Starting sector to process */
blk_rq_sectors(req); /* Total sectors to process */
rq_for_each_segment(bv, req, iter) /* Iterator to extract individual buffers */

rq_for_each_segment() is the special one: it iterates over the struct request (req) using iter, extracting the individual buffer information into the struct bio_vec (bv: basic input/output vector) on each iteration. Then, for each extracted segment, the appropriate data transfer is done, based on the operation type, invoking one of the following APIs from ram_device.c:

void ramdevice_write(sector_t sector_off, u8 *buffer, unsigned int sectors);
void ramdevice_read(sector_t sector_off, u8 *buffer, unsigned int sectors);

Check out the complete code of rb_transfer() in ram_block.c.
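
For a feel of how such a transfer loop is typically structured, here is a trimmed-down sketch in the spirit of rb_transfer(), written against the older pointer-style bio_vec iteration of the 2.6.x kernels this series builds against; the real function in ram_block.c remains the reference:

static int rb_transfer_sketch(struct request *req)
{
	int dir = rq_data_dir(req); /* 0 => read from device, else write to it */
	sector_t start_sector = blk_rq_pos(req);
	struct bio_vec *bv;
	struct req_iterator iter;
	u8 *buffer;
	unsigned int sectors;

	rq_for_each_segment(bv, req, iter)
	{
		buffer = page_address(bv->bv_page) + bv->bv_offset;
		if (bv->bv_len % 512) /* Expecting whole sectors only */
			return -EIO;
		sectors = bv->bv_len / 512;
		if (dir == WRITE)
			ramdevice_write(start_sector, buffer, sectors);
		else
			ramdevice_read(start_sector, buffer, sectors);
		start_sector += sectors;
	}
	return 0;
}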

Summing up

“With that, we have actually learnt the beautiful block drivers by traversing through the design of a hard disk, and playing around with partitioning, formatting, and various other raw operations on a hard disk. Thanks for your patient listening. Now, the session is open for questions. Or, you may post your queries as comments, below.”

Sixteenth Article >>


Understanding the Partitions: A dive inside the hard disk

This fourteenth article, which is part of the series on Linux device drivers, takes you for a walk inside a hard disk.

<< Thirteenth Article

Hard disk design

“Doesn’t it sound like a mechanical engineering subject: Design of hard disk?”, questioned Shweta. “Yes, it does. But understanding it gets us an insight into its programming aspect”, reasoned Pugs while waiting for the commencement of the seminar on storage systems.

Figure 23: Partition listing by fdisk

The seminar started with a few hard disks in the presenter’s hand and then a dive down into her system, showing the output of fdisk -l, as shown in Figure 23. The first line shows the hard disk size in human-friendly format and in bytes. The second line mentions the number of logical heads, logical sectors per track, and the actual number of cylinders on the disk – these together are referred to as the geometry of the disk. The 255 heads indicate the number of platters or disks, as one read-write head is needed per disk. Let’s number them, say D1, D2, …, D255. Now, each disk would have the same number of concentric circular tracks, starting from outside to inside. In the above case there are 60801 such tracks per disk. Let’s number them, say T1, T2, …, T60801. And a particular track number from all the disks forms a cylinder of the same number. For example, tracks T2 from D1, D2, …, D255 will all together form the cylinder C2. Now, each track has the same number of logical sectors – 63 in our case, say S1, S2, …, S63. And each sector is typically 512 bytes. Given this data, one can actually compute the total usable hard disk size, using the following formula:

Usable hard disk size in bytes = (Number of heads or disks) * (Number of tracks per disk) * (Number of sectors per track) * (Number of bytes per sector, i.e. sector size).

For the disk under consideration it would be: 255 * 60801 * 63 * 512 bytes = 500105249280 bytes. Note that this number may be slightly less than the actual hard disk size – 500107862016 bytes in our case. The reason is that the formula doesn’t consider the bytes in the last partial or incomplete cylinder. And the primary reason for that is the difference between today’s technology of organizing the actual physical disk geometry and the traditional geometry representation using heads, cylinders, and sectors. Note that in the fdisk output, we referred to the heads and sectors per track as logical, not actual. One may ask: if today’s disks don’t have such physical geometry concepts, why still maintain them and represent them in a more logical form? The main reason is to be able to continue with the same concepts of partitioning and to maintain the same partition table formats, especially the most prevalent DOS-type partition tables, which heavily depend on this simplistic geometry. Note the computation of the cylinder size (255 heads * 63 sectors/track * 512 bytes/sector = 8225280 bytes) in the third line, and then the demarcation of partitions in units of complete cylinders.

DOS type partition tables

This brings us to the next important topic of understanding the DOS-type partition tables. But in the first place, what is a partition, and why partition at all? A hard disk can be divided into one or more logical disks, each of which is called a partition. This is useful for organizing different types of data separately, for example, different operating system data, user data, temporary data, etc. So, partitions are basically logical divisions and hence need to be maintained through some meta data, which is the partition table. A DOS-type partition table contains 4 partition entries, each being a 16-byte entry. And each of these entries can be depicted by the following ‘C’ structure:

typedef struct
{
	unsigned char boot_type; // 0x00 - Inactive; 0x80 - Active (Bootable)
	unsigned char start_head;
	unsigned char start_sec:6;
	unsigned char start_cyl_hi:2;
	unsigned char start_cyl;
	unsigned char part_type;
	unsigned char end_head;
	unsigned char end_sec:6;
	unsigned char end_cyl_hi:2;
	unsigned char end_cyl;
	unsigned int abs_start_sec; /* 4 bytes even on 64-bit systems */
	unsigned int sec_in_part;
} PartEntry;

This partition table, followed by the two-byte signature 0xAA55, resides at the end of the hard disk’s first sector, commonly known as the Master Boot Record, or MBR in short. Hence, the starting offset of this partition table within the MBR is 512 - (4 * 16 + 2) = 446. Also, a 4-byte disk signature is placed at offset 440. The remaining initial 440 bytes of the MBR are typically used to place the first piece of boot code, which is loaded by the BIOS to boot up the system from the disk. The listing of part_info.c contains these various defines, and the code for parsing and printing a formatted output of the partition table of one’s hard disk.

From the partition table entry structure, it could be noted that the start and end cylinder fields are only 10 bits long, thus allowing a maximum of only 1023 cylinders. However, for today’s huge hard disks, this size is nowhere sufficient. Hence, in overflow cases, the corresponding <head, cylinder, sector> triplet in the partition table entry is set to the maximum value, and the actual value is computed using the last two fields: the absolute start sector number (abs_start_sec) and the number of sectors in this partition (sec_in_part). The listing of part_info.c contains the code for this as well.

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR_SIZE 512
#define MBR_SIZE SECTOR_SIZE
#define MBR_DISK_SIGNATURE_OFFSET 440
#define MBR_DISK_SIGNATURE_SIZE 4
#define PARTITION_TABLE_OFFSET 446
#define PARTITION_ENTRY_SIZE 16 // sizeof(PartEntry)
#define PARTITION_TABLE_SIZE 64 // sizeof(PartTable)
#define MBR_SIGNATURE_OFFSET 510
#define MBR_SIGNATURE_SIZE 2
#define MBR_SIGNATURE 0xAA55
#define BR_SIZE SECTOR_SIZE
#define BR_SIGNATURE_OFFSET 510
#define BR_SIGNATURE_SIZE 2
#define BR_SIGNATURE 0xAA55

typedef struct
{
	unsigned char boot_type; // 0x00 - Inactive; 0x80 - Active (Bootable)
	unsigned char start_head;
	unsigned char start_sec:6;
	unsigned char start_cyl_hi:2;
	unsigned char start_cyl;
	unsigned char part_type;
	unsigned char end_head;
	unsigned char end_sec:6;
	unsigned char end_cyl_hi:2;
	unsigned char end_cyl;
	unsigned int abs_start_sec; /* 4 bytes even on 64-bit systems */
	unsigned int sec_in_part;
} PartEntry;

typedef struct
{
	unsigned char boot_code[MBR_DISK_SIGNATURE_OFFSET];
	unsigned int disk_signature; /* 4 bytes even on 64-bit systems */
	unsigned short pad;
	unsigned char pt[PARTITION_TABLE_SIZE];
	unsigned short signature;
} MBR;

void print_computed(unsigned long sector)
{
	unsigned long heads, cyls, tracks, sectors;

	sectors = sector % 63 + 1 /* As indexed from 1 */;
	tracks = sector / 63;
	cyls = tracks / 255 + 1 /* As indexed from 1 */;
	heads = tracks % 255;
	printf("(%3d/%5d/%1d)", heads, cyls, sectors);
}

int main(int argc, char *argv[])
{
	char *dev_file = "/dev/sda";
	int fd, i, rd_val;
	MBR m;
	PartEntry *p = (PartEntry *)(m.pt);

	if (argc == 2)
	{
		dev_file = argv[1];
	}
	if ((fd = open(dev_file, O_RDONLY)) == -1)
	{
		fprintf(stderr, "Failed opening %s: ", dev_file);
		perror("");
		return 1;
	}
	if ((rd_val = read(fd, &m, sizeof(m))) != sizeof(m))
	{
		fprintf(stderr, "Failed reading %s: ", dev_file);
		perror("");
		close(fd);
		return 2;
	}
	close(fd);
	printf("\nDOS type Partition Table of %s:\n", dev_file);
	printf("  B Start (H/C/S)   End (H/C/S) Type  StartSec    TotSec\n");
	for (i = 0; i < 4; i++)
	{
		printf("%d:%d (%3d/%4d/%2d) (%3d/%4d/%2d)  %02X %10d %9d\n",
			i + 1, !!(p[i].boot_type & 0x80),
			p[i].start_head,
			1 + ((p[i].start_cyl_hi << 8) | p[i].start_cyl),
			p[i].start_sec,
			p[i].end_head,
			1 + ((p[i].end_cyl_hi << 8) | p[i].end_cyl),
			p[i].end_sec,
			p[i].part_type,
			p[i].abs_start_sec, p[i].sec_in_part);
	}
	printf("\nRe-computed Partition Table of %s:\n", dev_file);
	printf("  B Start (H/C/S)   End (H/C/S) Type  StartSec    TotSec\n");
	for (i = 0; i < 4; i++)
	{
		printf("%d:%d ", i + 1, !!(p[i].boot_type & 0x80));
		print_computed(p[i].abs_start_sec);
		printf(" ");
		print_computed(p[i].abs_start_sec + p[i].sec_in_part - 1);
		printf(" %02X %10d %9d\n", p[i].part_type,
			p[i].abs_start_sec, p[i].sec_in_part);
	}
	printf("\n");
	return 0;
}

As the above code (part_info.c) is an application, compile it to an executable (./part_info) as follows: gcc part_info.c -o part_info, and then run ./part_info /dev/sda to check out your primary partitioning information on /dev/sda. Figure 24 shows the output of ./part_info on the presenter’s system. Compare it with the fdisk output as shown in figure 23.

Figure 24: Output of ./part_info

Partition types and Boot records

Now, as this partition table is hard-coded to have 4 entries, that dictates the maximum number of partitions we can have, namely 4. These are called primary partitions, each having a type associated with it in its corresponding partition table entry. The various types are typically coined by the various operating system vendors, and hence it is a sort of mapping to the various operating systems, for example, DOS, Minix, Linux, Solaris, BSD, FreeBSD, QNX, W95, Novell Netware, etc., to be used for/with the particular operating system. However, this is more of ego and formality than a real requirement.

Apart from these, one of the 4 primary partitions can actually be labeled as what is referred to as an extended partition, and that has a special significance. As the name suggests, it is used to further extend the hard disk division, i.e. to have more partitions. These additional partitions are referred to as logical partitions and are created within the extended partition. The meta data of the logical partitions is maintained in a linked-list format, allowing an unlimited number of logical partitions, at least theoretically. For that, the first sector of the extended partition, commonly referred to as the Boot Record, or BR in short, is used in a similar way as the MBR to store (the linked-list head of) the partition table for the logical partitions. Subsequent linked-list nodes are stored in the first sector of the subsequent logical partitions, referred to as Logical Boot Records, or LBRs in short. Each linked-list node is a complete 4-entry partition table, though only the first two entries are used – the first for the linked-list data, namely information about the immediate logical partition, and the second one as the linked list’s next pointer, pointing to the list of the remaining logical partitions.
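
To make the linked-list picture concrete, here is a hedged sketch (call it part_ext_info.c – a hypothetical companion to part_info.c, not from the article’s listings) of how that chain could be walked. It reuses the PartEntry layout shown above; entry 0 of each BR/LBR describes the immediate logical partition (relative to that BR/LBR’s own sector), while entry 1 points to the next BR/LBR (relative to the start of the extended partition), and an all-zero entry 1 terminates the list:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR_SIZE 512
#define PARTITION_TABLE_OFFSET 446

typedef struct /* Same layout as PartEntry in part_info.c */
{
	unsigned char boot_type;
	unsigned char start_head;
	unsigned char start_sec:6;
	unsigned char start_cyl_hi:2;
	unsigned char start_cyl;
	unsigned char part_type;
	unsigned char end_head;
	unsigned char end_sec:6;
	unsigned char end_cyl_hi:2;
	unsigned char end_cyl;
	unsigned int abs_start_sec;
	unsigned int sec_in_part;
} PartEntry;

int main(int argc, char *argv[]) /* Compile with -D_FILE_OFFSET_BITS=64 for big disks */
{
	unsigned char br[SECTOR_SIZE];
	PartEntry *p = (PartEntry *)(br + PARTITION_TABLE_OFFSET);
	unsigned long long ext_start, cur;
	int fd, i = 5; /* Logical partitions are numbered from 5 */

	if (argc != 3)
	{
		fprintf(stderr, "Usage: %s <disk> <extended partition start sector>\n", argv[0]);
		return 1;
	}
	ext_start = cur = strtoull(argv[2], NULL, 0);
	if ((fd = open(argv[1], O_RDONLY)) == -1)
	{
		perror("Failed opening the disk");
		return 2;
	}
	do
	{
		lseek(fd, (off_t)(cur * SECTOR_SIZE), SEEK_SET);
		if (read(fd, br, SECTOR_SIZE) != SECTOR_SIZE)
			break;
		printf("%d: type %02X start sector %llu total sectors %u\n", i++,
			p[0].part_type, cur + p[0].abs_start_sec, p[0].sec_in_part);
		cur = ext_start + p[1].abs_start_sec; /* Next node of the chain */
	} while (p[1].sec_in_part); /* A zeroed-out second entry ends the list */
	close(fd);
	return 0;
}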

To compare and understand the primary partitioning details on your system’s hard disk, follow these steps (as root user and hence with care):

  • ./part_info /dev/sda # Displays the partition table on /dev/sda
  • fdisk -l /dev/sda # To display and compare the partition table entries with the above

In case you have multiple hard disks (/dev/sdb, …), or hard disk device files with other names (/dev/hda, …), or an extended partition, you may try ./part_info <device_file_name> on them as well. Trying it on an extended partition would give the information about the starting partition table of the logical partitions.

Summing up

“Right now we carefully and selectively played (read only) with our system’s hard disk. Why carefully? As otherwise, we may render our system non-bootable. But no learning is complete without a total exploration. Hence, in our post-lunch session, we would create a dummy disk on RAM and do the destructive explorations.”

Fifteenth Article >>


USB Drivers in Linux: Data Transfer to & from USB Devices

This thirteenth article, which is part of the series on Linux device drivers, details the ultimate step of data transfer to and from a USB device using your first USB driver in Linux – a continuation from the previous two articles.

<< Twelfth Article

USB miscellany

Pugs continued, “To answer your question about how a driver selectively registers or skips a particular interface of a USB device, you need to understand the significance of the return value of the probe() callback.” Note that the USB core would invoke probe for all the interfaces of a detected device, except the ones which are already registered. So, for the first time, it would call it for all of them. Now, if the probe returns 0, it means the driver has registered for that interface. Returning an error code indicates not registering for it. That’s all. “That was simple”, commented Shweta.

“Now, let’s talk about the ultimate – data transfers to & from a USB device”, continued Pugs. “But before that, tell me what is this MODULE_DEVICE_TABLE? This has been bothering me since you explained the USB device id table macros”, asked Shweta, pausing Pugs. “That’s another trivial thing. It is mainly for the user-space depmod”, said Pugs. “Module” is another name for a driver, which is dynamically loadable and unloadable. The macro MODULE_DEVICE_TABLE generates two variables in a module’s read-only section, which are extracted by depmod and stored in global map files under /lib/modules/<kernel_version>. modules.usbmap and modules.pcimap are two such files for USB & PCI device drivers, respectively. This enables auto-loading of these drivers, as we saw the usb-storage driver getting auto-loaded.

USB data transfer

“Time for USB data transfers. Let’s build upon the USB device driver coded in our previous sessions, using the same handy JetFlash pen drive from Transcend with vendor id 0x058f and product id 0x6387.”

USB being a hardware protocol, it forms the usual horizontal layer in the kernel space. And hence, for it to provide an interface to user space, it has to connect through one of the vertical layers. As the character (driver) vertical has already been discussed, it is the current preferred choice for the connection with the USB horizontal, for understanding the complete data transfer flow. Also, we do not need to get a free unreserved character major number, but can use the character major number 180, reserved for USB-based character device files. Moreover, to achieve this complete character driver logic with the USB horizontal in one go, the following are the APIs declared in <linux/usb.h>:

int usb_register_dev(struct usb_interface *intf,
				struct usb_class_driver *class_driver);
void usb_deregister_dev(struct usb_interface *intf,
				struct usb_class_driver *class_driver);

Usually, we would expect these functions to be invoked in the constructor and the destructor of a module, respectively. However, to achieve the hot-plug-n-play behaviour for the (character) device files corresponding to USB devices, these are instead invoked in the probe and the disconnect callbacks, respectively. The first parameter in the above functions is the interface pointer received as the first parameter in both probe and disconnect. The second parameter – struct usb_class_driver – needs to be populated with the suggested device file name and the set of device file operations, before invoking usb_register_dev(). For the actual usage, refer to the functions pen_probe() and pen_disconnect() in the code listing of pen_driver.c below.

Moreover, as the file operations (write, read, …) are now provided, that is exactly where we need to do the data transfers to and from the USB device. So, pen_write() and pen_read() below show the possible calls to usb_bulk_msg() (prototyped in <linux/usb.h>) to do the transfers over the pen drive’s bulk endpoints 0x01 and 0x82, respectively. Refer to the ‘E’ lines of the middle section in Figure 19 for the endpoint number listings of our pen drive. Refer to the header file <linux/usb.h> under the kernel sources, for the complete list of USB core API prototypes for the other endpoint-specific data transfer functions like usb_control_msg(), usb_interrupt_msg(), etc. usb_rcvbulkpipe(), usb_sndbulkpipe(), and many other such macros, also defined in <linux/usb.h>, compute the actual endpoint bitmask to be passed to the various USB core APIs.

Figure 19: USB’s proc window snippet

Note that a pen drive belongs to a USB mass storage class, which expects a set of SCSI like commands to be transacted over the bulk endpoints. So, a raw read/write as shown in the code listing below may not really do a data transfer as expected, unless the data is appropriately formatted. But still, this summarizes the overall code flow of a USB driver. To get a feel of real working USB data transfer in a simple and elegant way, one would need some kind of custom USB device, something like the one available at eSrijan.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/usb.h>

#define MIN(a,b) (((a) <= (b)) ? (a) : (b))
#define BULK_EP_OUT 0x01
#define BULK_EP_IN 0x82
#define MAX_PKT_SIZE 512

static struct usb_device *device;
static struct usb_class_driver class;
static unsigned char bulk_buf[MAX_PKT_SIZE];

static int pen_open(struct inode *i, struct file *f)
{
	return 0;
}
static int pen_close(struct inode *i, struct file *f)
{
	return 0;
}
static ssize_t pen_read(struct file *f, char __user *buf, size_t cnt, loff_t *off)
{
	int retval;
	int read_cnt;

	/* Read the data from the bulk endpoint */
	retval = usb_bulk_msg(device, usb_rcvbulkpipe(device, BULK_EP_IN),
			bulk_buf, MAX_PKT_SIZE, &read_cnt, 5000);
	if (retval)
	{
		printk(KERN_ERR "Bulk message returned %d\n", retval);
		return retval;
	}
	if (copy_to_user(buf, bulk_buf, MIN(cnt, read_cnt)))
	{
		return -EFAULT;
	}

	return MIN(cnt, read_cnt);
}
static ssize_t pen_write(struct file *f, const char __user *buf, size_t cnt,
									loff_t *off)
{
	int retval;
	int wrote_cnt = MIN(cnt, MAX_PKT_SIZE);

	if (copy_from_user(bulk_buf, buf, MIN(cnt, MAX_PKT_SIZE)))
	{
		return -EFAULT;
	}

	/* Write the data into the bulk endpoint */
	retval = usb_bulk_msg(device, usb_sndbulkpipe(device, BULK_EP_OUT),
			bulk_buf, MIN(cnt, MAX_PKT_SIZE), &wrote_cnt, 5000);
	if (retval)
	{
		printk(KERN_ERR "Bulk message returned %d\n", retval);
		return retval;
	}

	return wrote_cnt;
}

static struct file_operations fops =
{
	.open = pen_open,
	.release = pen_close,
	.read = pen_read,
	.write = pen_write,
};

static int pen_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
	int retval;

	device = interface_to_usbdev(interface);

	class.name = "usb/pen%d";
	class.fops = &fops;
	if ((retval = usb_register_dev(interface, &class)) < 0)
	{
		/* Something prevented us from registering this driver */
		err("Not able to get a minor for this device.");
	}
	else
	{
		printk(KERN_INFO "Minor obtained: %d\n", interface->minor);
	}

	return retval;
}

static void pen_disconnect(struct usb_interface *interface)
{
	usb_deregister_dev(interface, &class);
}

/* Table of devices that work with this driver */
static struct usb_device_id pen_table[] =
{
	{ USB_DEVICE(0x058F, 0x6387) },
	{} /* Terminating entry */
};
MODULE_DEVICE_TABLE (usb, pen_table);

static struct usb_driver pen_driver =
{
	.name = "pen_driver",
	.probe = pen_probe,
	.disconnect = pen_disconnect,
	.id_table = pen_table,
};

static int __init pen_init(void)
{
	int result;

	/* Register this driver with the USB subsystem */
	if ((result = usb_register(&pen_driver)))
	{
		err("usb_register failed. Error number %d", result);
	}
	return result;
}

static void __exit pen_exit(void)
{
	/* Deregister this driver with the USB subsystem */
	usb_deregister(&pen_driver);
}

module_init(pen_init);
module_exit(pen_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anil Kumar Pugalia <email@sarika-pugs.com>");
MODULE_DESCRIPTION("USB Pen Device Driver");

As a reminder, the usual steps for any Linux device driver may be repeated with the above code, along with the pen drive steps:

  • Build the driver (pen_driver.ko file) by running make.
  • Load the driver using insmod pen_driver.ko.
  • Plug-in the pen drive (after making sure that usb-storage driver is not already loaded).
  • Check for the dynamic creation of /dev/pen0. (0 being the minor number obtained – check dmesg logs for the value on your system)
  • Possibly try some write/read on /dev/pen0. (Though you may mostly get connection timeout and/or broken pipe errors because of non-conformant SCSI commands)
  • Unplug the pen drive and check that /dev/pen0 is gone.
  • Unload the driver using rmmod pen_driver.

Summing up

Meanwhile, Pugs hooked up his first of its kind creation – the Linux device driver kit (LDDK) into his system to show a live demonstration of the USB data transfers. “A ha! Finally a cool complete working USB driver”, quipped excited Shweta. “Want to have more fun. We could do a block driver over it”, added Pugs. “O! Really”, Shweta asked with a glee on her face. “Yes. But before that we would need to understand the partitioning mechanisms”, commented Pugs.

Fourteenth Article >>

Notes

  1. Make sure that you replace the vendor id & device id in the above code examples by the ones of your pen drive. Also, make sure that the endpoint numbers used in the above code examples match the endpoint numbers of your pen drive. Otherwise, you may get an error like “… bulk message returned error 22 – Invalid argument …”, while reading from pen device.
  2. Also, make sure that the driver from the previous article is unloaded, i.e. pen_info is not loaded. Otherwise, it may give an error message “insmod: error inserting ‘pen_driver.ko’: -1 Device or resource busy”, while doing insmod pen_driver.ko.
  3. One may wonder how the usb-storage driver gets autoloaded. The answer lies in the module autoload rules written down in the file /lib/modules/<kernel_version>/modules.usbmap. If you are an expert, you may comment out the corresponding line, for it to not get autoloaded. And uncomment it back, once you are done with your experiments.
  4. In latest distros, you may not find the detailed description of the USB devices using cat /proc/bus/usb/devices, as the /proc/bus/usb/ itself has been deprecated. You can find the same detailed info using cat /sys/kernel/debug/usb/devices – though you may need root permissions for the same. Also, if you do not see any file under /sys/kernel/debug (even as root), then you may have to first mount the debug filesystem, as follows: mount -t debugfs none /sys/kernel/debug.

USB Drivers in Linux (Continued)

This twelfth article, which is part of the series on Linux device drivers, gets you further with writing your first USB driver in Linux – a continuation from the previous article.

<< Eleventh Article

Pugs continued, “Let’s build upon the USB device driver coded in our previous session, using the same handy JetFlash pen drive from Transcend with vendor id 0x058f and product id 0x6387. For that, we would dig further into the USB protocol and then convert the learning into code.”

USB endpoints and their types

Depending on the type and attributes of information to be transferred, a USB device may have one or more endpoints, each belonging to one of the following four categories:

  • Control – For transferring control information. Examples include resetting the device, querying information about the device, etc. All USB devices always have the default control endpoint zero.
  • Interrupt – For small and fast data transfer, typically of up to 8 bytes. Examples include data transfer for serial ports, human interface devices (HIDs) like keyboard, mouse, etc.
  • Bulk – For big but comparatively slow data transfer. A typical example is data transfers for mass storage devices.
  • Isochronous – For big data transfer with bandwidth guarantee, though data integrity may not be guaranteed. Typical practical usage examples include transfers of time-sensitive data like of audio, video, etc.

Additionally, all but the control endpoints could be “in” or “out”, indicating the direction of data transfer. “in” indicates data flow from the USB device to the host machine, and “out” indicates data flow from the host machine to the USB device. Technically, an endpoint is identified using an 8-bit number, the most significant bit (MSB) of which indicates the direction – 0 meaning out, and 1 meaning in. Control endpoints are bi-directional and the MSB is ignored.
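
As a small illustration of this encoding, the direction and transfer type of an endpoint can be decoded in a driver from its descriptor, using the standard masks from <linux/usb/ch9.h> (pulled in by <linux/usb.h>). The following is only a sketch of the decoding logic; the descriptor itself would come from the interface being probed, as in the listing later in this article:

#include <linux/kernel.h>
#include <linux/usb.h>

/* Sketch: decode the direction and transfer type of an endpoint descriptor */
static void print_endpoint_info(const struct usb_endpoint_descriptor *epd)
{
	/* MSB of bEndpointAddress: 1 => "in" (device to host), 0 => "out" */
	int is_in = (epd->bEndpointAddress & USB_DIR_IN) ? 1 : 0;
	/* Lower two bits of bmAttributes give the transfer type */
	int type = epd->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK;

	printk(KERN_INFO "EP 0x%02X: %s, %s\n", epd->bEndpointAddress,
		is_in ? "in" : "out",
		(type == USB_ENDPOINT_XFER_CONTROL) ? "control" :
		(type == USB_ENDPOINT_XFER_ISOC) ? "isochronous" :
		(type == USB_ENDPOINT_XFER_BULK) ? "bulk" : "interrupt");
}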

Figure 19: USB’s proc window snippet

Figure 19 shows a typical snippet of USB device specifications for the devices connected on a system. To be specific, the “E: ” lines in the figure show an example of an interrupt endpoint of a UHCI Host Controller and two bulk endpoints of the pen drive under consideration. The endpoint numbers (in hex) are 0x81, 0x01 and 0x82, respectively. The MSB of the first and third being 1 indicates “in” endpoints, represented by “(I)” in the figure; the second one is an “out” endpoint, represented by “(O)”. “MaxPS” specifies the maximum packet size, i.e. the data size that can be transferred in a single go. As expected, it is 2 (<= 8) for the interrupt endpoint, and 64 for the bulk endpoints. “Ivl” specifies the interval in milliseconds to be given between two consecutive data packet transfers for proper transfer, and is more significant for interrupt endpoints.

Decoding a USB device section

As we have just discussed the “E: ” line, it is the right time to decode the relevant fields of the others as well. In short, these lines in a USB device section give a complete overview of the USB device as per the USB specifications, as discussed in our previous article. Refer back to Figure 19. The first letter of the first line of every device section is a “T”, indicating the position of the device in the USB tree, uniquely identified by the triplet <usb bus number, usb tree level, usb port>. “D” represents the device descriptor containing at least the device version, device class/category, and the number of configurations available for this device. There would be as many “C” lines as the number of configurations, typically one. “C” represents the configuration descriptor containing its index, device attributes in this configuration, maximum power (actually current) the device would draw in this configuration, and the number of interfaces under this configuration. Depending on that, there would be at least that many “I” lines. There could be more in case an interface has alternates, i.e. the same interface number but with different properties – a typical scenario for web-cams.

“I” represents the interface descriptor with its index, alternate number, functionality class/category of this interface, driver associated with this interface, and the number of endpoints under this interface. The interface class may or may not be the same as the device class. And depending on the number of endpoints, there would be as many “E” lines, the details of which have already been discussed above. The “*” after the “C” & “I” represents the currently active configuration and interface, respectively. The “P” line provides the vendor id, product id, and the product revision. “S” lines are string descriptors showing some vendor-specific descriptive information about the device.

“Peeping into /proc/bus/usb/devices is good to figure out whether a device has been detected or not and possibly to get the first cut overview of the device. But most probably this information would be required in writing the driver for the device as well. So, is there a way to access it using ‘C’ code?”, asked Shweta. “Yes definitely, that’s what I am going to tell you next. Do you remember that as soon as a USB device is plugged into the system, the USB host controller driver populates its information into the generic USB core layer? To be precise, it puts that into a set of structures embedded into one another, exactly as per the USB specifications”, replied Pugs. The following are the exact data structures defined in <linux/usb.h> – ordered here in reverse for flow clarity:

struct usb_device
{
	...
	struct usb_device_descriptor descriptor;
	struct usb_host_config *config, *actconfig;
	...
};
struct usb_host_config
{
	struct usb_config_descriptor desc;
	...
	struct usb_interface *interface[USB_MAXINTERFACES];
	...
};
struct usb_interface
{
	struct usb_host_interface *altsetting /* array */, *cur_altsetting;
	...
};
struct usb_host_interface
{
	struct usb_interface_descriptor desc;
	struct usb_host_endpoint *endpoint /* array */;
	...
};
struct usb_host_endpoint
{
	struct usb_endpoint_descriptor  desc;
	...
};

So, with access to the struct usb_device handle for a specific device, all the USB-specific information about the device can be decoded, as shown through the /proc window. But how to get the device handle? In fact, the device handle is not available directly in a driver; rather, the per-interface handles (pointers to struct usb_interface) are available, as USB drivers are written for device interfaces rather than the device as a whole. Recall that the probe & disconnect callbacks, which are invoked by the USB core for every interface of the registered device, have the corresponding interface handle as their first parameter. Refer to the prototypes below:

int (*probe)(struct usb_interface *interface, const struct usb_device_id *id);
void (*disconnect)(struct usb_interface *interface);

So, with the interface pointer, all information about the corresponding interface can be accessed. And to get the containing device handle, the following macro comes to the rescue (note that it returns a pointer to the struct usb_device):

struct usb_device *device = interface_to_usbdev(interface);

Adding this new learning to the registration-only driver from the previous article gives the following code listing (pen_info.c):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/usb.h>

static struct usb_device *device;

static int pen_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
	struct usb_host_interface *iface_desc;
	struct usb_endpoint_descriptor *endpoint;
	int i;

	iface_desc = interface->cur_altsetting;
	printk(KERN_INFO "Pen i/f %d now probed: (%04X:%04X)\n",
			iface_desc->desc.bInterfaceNumber,
			id->idVendor, id->idProduct);
	printk(KERN_INFO "ID->bNumEndpoints: %02X\n",
			iface_desc->desc.bNumEndpoints);
	printk(KERN_INFO "ID->bInterfaceClass: %02X\n",
			iface_desc->desc.bInterfaceClass);

	for (i = 0; i < iface_desc->desc.bNumEndpoints; i++)
	{
		endpoint = &iface_desc->endpoint[i].desc;

		printk(KERN_INFO "ED[%d]->bEndpointAddress: 0x%02X\n",
				i, endpoint->bEndpointAddress);
		printk(KERN_INFO "ED[%d]->bmAttributes: 0x%02X\n",
				i, endpoint->bmAttributes);
		printk(KERN_INFO "ED[%d]->wMaxPacketSize: 0x%04X (%d)\n",
				i, endpoint->wMaxPacketSize,
				endpoint->wMaxPacketSize);
	}

	device = interface_to_usbdev(interface);
	return 0;
}

static void pen_disconnect(struct usb_interface *interface)
{
	printk(KERN_INFO "Pen i/f %d now disconnected\n",
			interface->cur_altsetting->desc.bInterfaceNumber);
}

static struct usb_device_id pen_table[] =
{
	{ USB_DEVICE(0x058F, 0x6387) },
	{} /* Terminating entry */
};
MODULE_DEVICE_TABLE (usb, pen_table);

static struct usb_driver pen_driver =
{
	.name = "pen_info",
	.probe = pen_probe,
	.disconnect = pen_disconnect,
	.id_table = pen_table,
};

static int __init pen_init(void)
{
	return usb_register(&pen_driver);
}

static void __exit pen_exit(void)
{
	usb_deregister(&pen_driver);
}

module_init(pen_init);
module_exit(pen_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anil Kumar Pugalia <email@sarika-pugs.com>");
MODULE_DESCRIPTION("USB Pen Info Driver");

Then, the usual steps for any Linux device driver may be repeated, along with the pen drive steps:

  • Build the driver (pen_info.ko file) by running make.
  • Load the driver using insmod pen_info.ko.
  • Plug-in the pen drive (after making sure that usb-storage driver is not already loaded).
  • Unplug the pen drive.
  • Check the output of dmesg for the logs.
  • Unload the driver using rmmod pen_info.

Figure 22 shows a snippet of the above steps on Pugs’ system. Remember to ensure (in the output of cat /proc/bus/usb/devices) that the usual usb-storage driver is not the one associated with the pen drive interface, rather it should be the pen_info driver.

Figure 22: Output of dmesg

Summing up

Before taking another break, Pugs shared two of the many mechanisms for a driver to specify its device(s) to the USB core, using the struct usb_device_id table. The first one is by specifying the <vendor id, product id> pair using the USB_DEVICE() macro (as done above), and the second one is by specifying the device class/category using the USB_DEVICE_INFO() macro. In fact, many more macros are available in <linux/usb.h> for various combinations. Moreover, multiple of these macros could be specified in the usb_device_id table (terminated by a null entry), for matching with any one of the criteria, enabling a single driver to be written for possibly many devices.
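
For instance, a minimal sketch of a table matching either a specific device or a whole device class could look like the following – the class triplet passed to USB_DEVICE_INFO() here (USB_CLASS_HUB, 0, 0) is only illustrative; use values appropriate for your devices:

#include <linux/module.h>
#include <linux/usb.h>

/* Sketch: match by <vendor id, product id> or by device class information */
static struct usb_device_id my_table[] =
{
	{ USB_DEVICE(0x058F, 0x6387) },            /* a specific pen drive */
	{ USB_DEVICE_INFO(USB_CLASS_HUB, 0, 0) },  /* any device of this class */
	{} /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, my_table);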

“Earlier you mentioned writing multiple drivers for a single device, as well. Basically, how do we selectively register or not register a particular interface of a USB device?”, queried Shweta. “Sure. That’s next in line of our discussion, along with the ultimate task in any device driver – the data transfer mechanisms” replied Pugs.

Thirteenth Article >>

Notes

  1. Make sure that you replace the vendor id & device id in the above code examples with those of your pen drive.
  2. One may wonder how usb-storage gets autoloaded. The answer lies in the module autoload rules written down in the file /lib/modules/<kernel_version>/modules.usbmap. If you are an expert, you may comment out the corresponding line so that it does not get autoloaded, and uncomment it back once you are done with your experiments.
  3. In recent distros, you may not find the detailed description of the USB devices using cat /proc/bus/usb/devices, as /proc/bus/usb/ itself has been deprecated. You can find the same detailed info using cat /sys/kernel/debug/usb/devices – though you may need root permissions for that. Also, if you do not see any file under /sys/kernel/debug (even as root), you may have to first mount the debug filesystem, as follows: mount -t debugfs none /sys/kernel/debug

USB Drivers in Linux

This eleventh article, which is part of the series on Linux device drivers, gets you started with writing your first USB driver in Linux.

<< Tenth Article

Pugs’ pen drive was the device Shweta was playing with, when both of them sat down to explore the world of USB drivers in Linux. The fastest way to get the hang of one, the usual Pugs’ way, was to pick up a USB device and write a driver for it to experiment with. So, they chose the pen drive, aka USB stick, available at hand. It was a JetFlash from Transcend with vendor ID 0x058f and product ID 0x6387.

USB device detection in Linux

Whether or not a driver for a USB device is present on a Linux system, a valid USB device would always get detected at the hardware and kernel spaces of a USB-enabled Linux system. A valid USB device is a device designed and detected as per the USB protocol specifications. Hardware space detection is done by the USB host controller – typically a native bus device, e.g. a PCI device on x86 systems. The corresponding host controller driver would pick up and translate the low-level physical layer information into higher level USB protocol specific information. The USB protocol formatted information about the USB device is then populated into the generic USB core layer (the usbcore driver) in kernel space, thus enabling the detection of a USB device in kernel space, even without having its specific driver.

After this, it is up to the various drivers, interfaces, and applications (which are dependent on the various Linux distributions) to have the user space view of the detected devices. Figure 17 shows a top-to-bottom view of the USB subsystem in Linux. A basic listing of all detected USB devices can be obtained using the lsusb command, as root. Figure 18 shows the same, without and with the pen drive inserted into the system. A -v option to lsusb provides detailed information. In many Linux distributions like Mandriva, Fedora, …, the usbfs driver is loaded as part of the default configuration. This enables the detected USB device details to be viewed in a more techno-friendly way through the /proc window using cat /proc/bus/usb/devices. Figure 19 shows a typical snippet of the same, clipped around the pen drive specific section. The complete listing basically contains such sections, one for each of the valid USB devices detected on the Linux system.

Figure 17: USB subsystem in Linux

Figure 18: Output of lsusb

Figure 19: USB’s proc window snippet

Decoding a USB device section

To further decode these sections, a valid USB device needs to be understood first. All valid USB devices contain one or more configurations. A configuration of a USB device is like a profile, where the default one is the commonly used one. As such, Linux supports only one configuration per device – the default one. For every configuration, the device may have one or more interfaces. An interface corresponds to a functionality provided by the device. There would be as many interfaces as the number of independent functionalities provided by the device. So, say an MFD (multi-function device) USB printer can do printing, scanning, and faxing; then it most likely would have at least three interfaces – one for each of the functionalities. So, unlike other device drivers, a USB device driver is typically associated with/written per interface, rather than the device as a whole – meaning one USB device may have multiple device drivers, though one interface can have at most one driver.

Moreover, it is okay and common to have a single USB device driver for all the interfaces of a USB device. The “Driver=…” entry in the proc window output (Figure 19) shows the interface-to-driver mapping – a “(none)” indicating no associated driver. For every interface, there would be one or more endpoints. An endpoint is like a pipe for transferring information either into or out of the interface of the device, depending on the functionality. Depending on the type of information transferred, endpoints are of four types:

  • Control
  • Interrupt
  • Bulk
  • Isochronous

As per USB protocol specification, all valid USB devices have an implicit special control endpoint zero, the only bi-directional endpoint. Figure 20 shows the complete pictorial representation of a valid USB device, based on the above explanation.

Coming back to the USB device sections (Figure 19), the first letter on each line represents the various parts of the USB device specification just explained. For example, D for device, C for configuration, I for interface, E for endpoint, etc. Details about them and various others are available in the kernel source file Documentation/usb/proc_usb_info.txt.

Figure 20: USB device overview

The USB pen drive driver registration

“Seems like there are so many things to know about the USB protocol to be able to write the first USB driver itself – device configuration, interfaces, transfer pipes, their four types, and so many other symbols like T, B, S, … under a USB device specification”, sighed Shweta. “Yes, but don’t you worry – all these can be talked in detail, later. Let’s do the first thing first – get our pen drive’s interface associated with our USB device driver (pen_register.ko)”, consoled Pugs. Like any other Linux device driver, here also we need the constructor and the destructor – basically the same driver template that has been used for all the drivers. Though the content would vary as this is a hardware protocol layer driver, i.e. a horizontal driver unlike a character driver, which was one of the vertical drivers, discussed so far. And the difference would be that instead of registering with & unregistering from VFS, here we would do that with the corresponding protocol layer – the USB core in this case; instead of providing a user space interface like a device file, it would get connected with the actual device in the hardware space. The USB core APIs for the same are as follows (prototyped in <linux/usb.h>):

int usb_register(struct usb_driver *driver);
void usb_deregister(struct usb_driver *);

As part of the usb_driver structure, the fields to be provided are the driver’s name, the ID table for auto-detecting the particular device, and the two callback functions to be invoked by the USB core during hot-plugging and hot-removal of the device, respectively. Putting it all together, pen_register.c would look like:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/usb.h>

static int pen_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
	printk(KERN_INFO "Pen drive (%04X:%04X) plugged\n", id->idVendor,
								id->idProduct);
	return 0;
}

static void pen_disconnect(struct usb_interface *interface)
{
	printk(KERN_INFO "Pen drive removed\n");
}

static struct usb_device_id pen_table[] =
{
	{ USB_DEVICE(0x058F, 0x6387) },
	{} /* Terminating entry */
};
MODULE_DEVICE_TABLE (usb, pen_table);

static struct usb_driver pen_driver =
{
	.name = "pen_driver",
	.id_table = pen_table,
	.probe = pen_probe,
	.disconnect = pen_disconnect,
};

static int __init pen_init(void)
{
	return usb_register(&pen_driver);
}

static void __exit pen_exit(void)
{
	usb_deregister(&pen_driver);
}

module_init(pen_init);
module_exit(pen_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anil Kumar Pugalia <email@sarika-pugs.com>");
MODULE_DESCRIPTION("USB Pen Registration Driver");

Then, the usual steps for any Linux device driver may be repeated:

  • Build the driver (.ko file) by running make
  • Load the driver using insmod
  • List the loaded modules using lsmod
  • Unload the driver using rmmod

But surprisingly, the results wouldn’t be as expected. Check dmesg and the proc window to see the various logs and details. It is not because a USB driver is different from a character driver – but there’s a catch. Figure 19 shows that the pen drive has one interface (numbered 0), which is already associated with the usual usb-storage driver. Now, in order to get our driver associated with that interface, we need to unload the usb-storage driver (i.e. rmmod usb-storage) after plugging in the pen drive, and then load our driver. Once this sequence is followed, the results would be as expected. Figure 21 shows a glimpse of the possible logs and proc window snippet. Repeat hot-plugging in and hot-plugging out the pen drive to observe the probe and disconnect calls in action – but don’t forget unloading the usb-storage driver, every time you plug in the pen drive.
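
A possible shell session for the above sequence is sketched below (module names as above; depending on the tools, the usb-storage module may show up as usb_storage in the lsmod output, and appropriate privileges are assumed):

# plug in the pen drive first; usb-storage typically grabs its interface
$ lsmod | grep usb_storage
$ rmmod usb-storage          # free the pen drive's interface
$ insmod pen_register.ko     # now our probe() should get invoked
$ dmesg | tail               # look for the "Pen drive ... plugged" log
$ rmmod pen_register         # when done experimenting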

Figure 21: Pen driver in action

Summing up

“Finally!!! Something into action.”, relieved Shweta. “But seems like there are so many things around (like the device ID table, probe, disconnect, …), yet to be assimilated and understood to get a complete USB device driver, in place”, she continued. “Yes, you are right. Let’s take them one by one, with breaks”, replied Pugs taking a stretching break.

Twelfth Article >>

Notes

  1. Make sure that you replace the vendor id & device id in the above code examples with those of your pen drive.
  2. One may wonder how usb-storage gets autoloaded. The answer lies in the module autoload rules written down in the file /lib/modules/<kernel_version>/modules.usbmap. If you are an expert, you may comment out the corresponding line so that it does not get autoloaded, and uncomment it back once you are done with your experiments.
  3. In recent distros, you may not find the detailed description of the USB devices using cat /proc/bus/usb/devices, as /proc/bus/usb/ itself has been deprecated. You can find the same detailed info using cat /sys/kernel/debug/usb/devices – though you may need root permissions for that. Also, if you do not see any file under /sys/kernel/debug (even as root), you may have to first mount the debug filesystem, as follows: mount -t debugfs none /sys/kernel/debug

Kernel Space Debuggers in Linux

This tenth article, which is part of the series on Linux device drivers, talks about kernel space debugging in Linux.

<< Ninth Article

Shweta was back from hospital and relaxing in the library, reading up various books. Ever since she came to know of the ioctl way of debugging, she had been impatient to know more about debugging in kernel space. The basic curiosity came from the question: how and where would one run a kernel space debugger, if there is any? Contrast this with application/user space debugging, where we have the OS running underneath, and a shell or a GUI over it to run the debugger – say, for example, the GNU debugger (gdb) or the data display debugger (ddd). And voila, she came across an interesting kernel space debugging mechanism using kgdb, provided as part of the kernel itself, since kernel 2.6.26.

Debugger challenge in kernel space

As we need some interface to be up to run a debugger, a debugger for debugging the kernel could be visualized in two possible ways:

  1. Put the debugger into the kernel itself. The debugger then runs from within, accessible through the usual monitor or console. An example of this is kdb. Until kernel 2.6.35, it was not official, meaning its source code needed to be downloaded as two sets of patches (one architecture dependent and one architecture independent) from ftp://oss.sgi.com/projects/kdb/download/ and then patched into the kernel source – though since kernel 2.6.35, the majority of it is part of the official kernel source release. In either case, kdb support needs to be enabled in the kernel source, and then the kernel has to be compiled, installed, and booted into. The boot screen itself would give the kdb debugging interface.
  2. Put a minimal debugging server into the kernel; a debugger client would then connect to it from a remote host system’s user space over some interface, say serial or ethernet. An example of this is kgdb – the kernel’s gdb server, to be used with gdb (the client) from a remote system over either serial or ethernet. Since kernel 2.6.26, its serial interface part has been merged into the official kernel source release. Though, if interested in its network interface part, it would still need to be patched in from one of the releases at http://sourceforge.net/projects/kgdb/. In either case, the next step would be to enable kgdb support in the kernel source, and then the kernel has to be compiled, installed, and booted into – though this time to be connected to from a remote debugger client.

Please note that in both the above cases, the complete kernel source for the kernel to be debugged is needed, unlike in the case of building modules, where just the headers are sufficient. Here is how to play around with kgdb over the serial interface.

Setting up the Linux kernel with kgdb

Pre-requisite: Either the kernel source package for the running kernel is installed on your system, or a corresponding kernel source release has been downloaded from http://kernel.org.

First of all, the kernel to be debugged needs to have kgdb enabled and built into it. To achieve that, the kernel source has to be configured with CONFIG_KGDB=y. Additionally, for kgdb over serial, CONFIG_KGDB_SERIAL_CONSOLE=y needs to be configured. And CONFIG_DEBUG_INFO is preferred, for symbolic data to be built into the kernel to make debugging with gdb more meaningful. CONFIG_FRAME_POINTER=y enables frame pointers in the kernel, allowing gdb to construct more accurate stack back traces. All these options are available under “Kernel hacking” in the menu obtained by issuing the following commands in the kernel source directory (preferably as root or using sudo):

$ make mrproper # To clean up properly
$ make oldconfig # Configure the kernel same as the current running one
$ make menuconfig # Start the ncurses based menu for further configuration

See the highlighted selections in Figure 15 for how and where these options appear (a quick way to cross-check the resulting configuration follows the list below):

  • “KGDB: kernel debugging with remote gdb” → CONFIG_KGDB
    • “KGDB: use kgdb over the serial console” → CONFIG_KGDB_SERIAL_CONSOLE
  • “Compile the kernel with debug info” → CONFIG_DEBUG_INFO
  • “Compile the kernel with frame pointers” → CONFIG_FRAME_POINTER
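
After saving the configuration, a quick way to cross-check that the intended options indeed made it into the generated .config is to grep it in the kernel source directory, for example:

$ grep -E "CONFIG_KGDB|CONFIG_DEBUG_INFO|CONFIG_FRAME_POINTER" .config
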
Figure 15: Configuring kernel with kgdb

Once configured and saved, the kernel can be built by typing make in the kernel source directory. And then a make install is expected to install it, along with adding an entry for the installed kernel in the grub configuration file. Depending on the distribution, the grub configuration file may be /boot/grub/menu.lst, /etc/grub.cfg, or something similar. Once installed, the kgdb-related kernel boot parameters need to be added to this newly added entry.

Figure 16: grub configuration for kernel with kgdb

Figure 16 highlights the kernel boot parameters added to the newly installed kernel, in the grub‘s configuration file.

kgdboc is for gdb connecting over console, and its basic format is: kgdboc=<serial_device>,<baud_rate>
where:
<serial_device> is the serial device file, on the system that would run the kernel to be debugged, of the serial port to be used for debugging
<baud_rate> is the baud rate of the serial port to be used for debugging

kgdbwait enables the booting kernel to wait till a remote gdb (i.e. gdb on another system) connects to it, and this parameter should be passed only after kgdboc.
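
For instance, for the first serial port at 115200 bps (the setup used in the snapshots here), the parameters appended to the kernel line of the grub entry would be along these lines (just a sketch – the exact device name and baud rate depend on your setup, and the device is typically given without the /dev/ prefix):

kgdboc=ttyS0,115200 kgdbwait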

With this, the system is ready to reboot into this newly built and installed kernel. On reboot, at the grub menu, select this new kernel to boot from, and then it will wait for gdb to connect with it from another system over the serial port.

All the above snapshots have been with kernel source 2.6.33.14, downloaded from http://kernel.org. And the same should work for at least any 2.6.3x release of the kernel source. Also, the snapshots are captured for kgdb over the serial device file /dev/ttyS0, i.e. the first serial port.

Setting up gdb on another system

Pre-requisite: The serial ports of the system to be debugged and of the other system to run gdb from should be connected using a null modem (i.e. a cross-over serial) cable.

Here are the gdb commands to get gdb on the other system to connect to the waiting kernel. All these commands have to be given at the gdb prompt, after typing gdb on the shell.

Connecting over serial port /dev/ttyS0 (of the system running gdb) with baud rate 115200 bps:

$ gdb
...
(gdb) file vmlinux
(gdb) set remote interrupt-sequence Ctrl-C
(gdb) set remotebaud 115200
(gdb) target remote /dev/ttyS0
(gdb) continue

In the above commands, vmlinux is the kernel image built with kgdb enabled, and it needs to be copied into the directory on the system from where gdb is being executed. Also, the serial device file and its baud rate have to be given correctly as per one’s system setup.

Debugging using gdb with kgdb

After this, it is all like debugging an application from gdb. One may stop execution using Ctrl+C, add breakpoints using b[reak], step execution using s[tep] or n[ext], … – the usual gdb way. For details on how to use gdb, there are enough tutorials available online. In fact, if not comfortable with text-based gdb, the debugging could be made GUI-based, using any of the standard GUI front-ends over gdb, for example, ddd, Eclipse, etc.
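
As a tiny illustrative session (the symbols used here are just examples – any function or variable built into your kernel image would do):

(gdb) break sys_sync
(gdb) continue
...
(gdb) bt            # back trace, once the breakpoint hits
(gdb) print jiffies
(gdb) continue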

Summing up

By now, Shweta was all excited to try out the kernel space debugging experiment using kgdb. And as she needed two systems to try it out, she decided to go to the Linux device drivers lab. There, she set up the system to be debugged with the kgdb-enabled kernel, and connected it with another system using a null modem cable. Then, on the second system, she executed gdb to remotely connect with and step through the kernel on the first system.

Eleventh Article >>

Notes

  1. Parameter for using kgdb over ethernet is kgdboe. It has the following format: kgdboe=[<this_udp_port>]@<this_ip>/[this_dev],[<remote_udp_port>]@<remote_ip>/[<remote_mac_addr>]

where:
<this_udp_port> is optional and defaults to 6443,
<this_ip> is the IP address of the system that would run the kernel to be debugged,
<this_dev> is optional and defaults to eth0,
<remote_udp_port> is optional and defaults to 6442,
<remote_ip> is the IP address of the system from which gdb would be connecting,
<remote_mac_addr> is optional and defaults to broadcast.
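
As an example, with 192.168.1.2 being the system to be debugged and 192.168.1.3 the system running gdb (matching the gdb commands below), and with all the optional fields left to their defaults, the boot parameter could look like the following sketch:

kgdboe=@192.168.1.2/,@192.168.1.3/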

Here are the gdb commands for connecting to kgdb over network port 6443 to IP address 192.168.1.2:

$ gdb
...
(gdb) file vmlinux
(gdb) set remotebreak 0
(gdb) target remote udp:192.168.1.2:6443
(gdb) continue

I/O Control in Linux

This ninth article, which is part of the series on Linux device drivers, talks about the typical ioctl() implementation and usage in Linux.

<< Eighth Article

“Get me a laptop and tell me about the experiments on the x86-specific hardware interfacing conducted in yesterday’s Linux device drivers’ laboratory session, and also about what’s planned for the next session”, cried Shweta, exasperated at being confined to bed and not being able to attend the classes. “Calm down!!! Don’t worry about that. We’ll help you make up for your classes. But first tell us what happened to you, so suddenly”, asked one of her friends, who had come to visit her in the hospital. “It’s all the fault of those chaats, I had in Rohan’s birthday party. I had such a painful food poisoning that led me here”, blamed Shweta. “How are you feeling now?”, asked Rohan sheepishly. “I’ll be all fine – just tell me all about the fun with hardware, you guys had. I had been waiting to attend that session and all this had to happen, right then”.

Rohan sat down beside Shweta and summarized the session to her, hoping to soothe her. That excited her more, and she started forcing them to tell her about the upcoming sessions as well. They knew that those would have something to do with hardware, but were unaware of the details. Meanwhile, the doctor came in and requested everybody to wait outside. That was an opportunity to plan and prepare. And they decided to talk about the most common hardware controlling operation: the ioctl(). Here is how it went.

Introducing an ioctl()

Input-output control (ioctl, in short) is a common operation, or system call, available with most driver categories. It is a “one size fits all” kind of system call. If there is no other system call that meets the requirement, then ioctl() is the one to use. Practical examples include volume control for an audio device, display configuration for a video device, reading device registers, … – basically anything to do with any device input/output, or for that matter, any device-specific operation. In fact, it is even more versatile – it need not be tied to any device-specific things; it can be used for any kind of operation. An example is debugging a driver, say by querying driver data structures.

The question is: how could all this variety be achieved by a single function prototype? The trick is in using its two key parameters: the command and the command’s argument. The command is just some number, representing some operation, defined as per the requirement. The argument is the corresponding parameter for the operation. The ioctl() function implementation then does a “switch … case” over the command, implementing the corresponding functionalities. The following had been its prototype in the Linux kernel for quite some time:

int ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg);

Though, since kernel 2.6.35, it has changed to the following:

long ioctl(struct file *f, unsigned int cmd, unsigned long arg);

If there is a need for more arguments, all of them are put in a structure and a pointer to the structure becomes the ‘one’ command argument. Whether integer or pointer, the argument is taken up as a long integer in kernel space and accordingly type cast and processed.

ioctl() is typically implemented as part of the corresponding driver and then an appropriate function pointer initialized with it, exactly as with other system calls open(), read(), … For example, in character drivers, it is the ioctl or unlocked_ioctl (since kernel 2.6.35) function pointer field in the struct file_operations, which is to be initialized.
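
A minimal sketch of that initialization for a character driver is shown below – the structure name query_fops is just an assumption here, and my_ioctl refers to the handler defined in the listing further below:

#include <linux/module.h>   /* THIS_MODULE */
#include <linux/fs.h>       /* struct file_operations */
#include <linux/version.h>  /* LINUX_VERSION_CODE, KERNEL_VERSION */

static struct file_operations query_fops =
{
	.owner = THIS_MODULE,
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35))
	.ioctl = my_ioctl,           /* older prototype, taking struct inode * */
#else
	.unlocked_ioctl = my_ioctl,  /* newer prototype, since kernel 2.6.35 */
#endif
};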

Again like other system calls, it can be equivalently invoked from the user space using the ioctl() system call, prototyped in <sys/ioctl.h> as:

int ioctl(int fd, int cmd, ...);

Here, cmd is the same as implemented in the driver’s ioctl(), and the variable argument construct (…) is a hack to be able to pass any one argument of any type to the driver’s ioctl(). Any additional arguments are ignored.

Note that both the command and command argument type definitions need to be shared across the driver (in kernel space) and the application (in user space). So, these definitions are commonly put into header files for each space.

Querying the driver internal variables

To better understand the boring theory explained above, here’s the code set for the “debugging a driver” example mentioned above. This driver has three static global variables, status, dignity, and ego, which need to be queried and possibly operated on from an application. query_ioctl.h defines the corresponding commands and the command argument type. The listing follows:

#ifndef QUERY_IOCTL_H
#define QUERY_IOCTL_H

#include <linux/ioctl.h>

typedef struct
{
	int status, dignity, ego;
} query_arg_t;

#define QUERY_GET_VARIABLES _IOR('q', 1, query_arg_t *)
#define QUERY_CLR_VARIABLES _IO('q', 2)

#endif

Using these, the driver’s ioctl() implementation in query_ioctl.c would be:

static int status = 1, dignity = 3, ego = 5;

#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35))
static int my_ioctl(struct inode *i, struct file *f, unsigned int cmd,
	unsigned long arg)
#else
static long my_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
#endif
{
	query_arg_t q;

	switch (cmd)
	{
		case QUERY_GET_VARIABLES:
			q.status = status;
			q.dignity = dignity;
			q.ego = ego;
			if (copy_to_user((query_arg_t *)arg, &q,
				sizeof(query_arg_t)))
			{
				return -EACCES;
			}
			break;
		case QUERY_CLR_VARIABLES:
			status = 0;
			dignity = 0;
			ego = 0;
			break;
		default:
			return -EINVAL;
	}

	return 0;
}

And finally the corresponding invocation functions from the application query_app.c would be as follows:

#include <stdio.h>
#include <sys/ioctl.h>

#include "query_ioctl.h"

void get_vars(int fd)
{
	query_arg_t q;

	if (ioctl(fd, QUERY_GET_VARIABLES, &q) == -1)
	{
		perror("query_apps ioctl get");
	}
	else
	{
		printf("Status : %d\n", q.status);
		printf("Dignity: %d\n", q.dignity);
		printf("Ego	: %d\n", q.ego);
	}
}
void clr_vars(int fd)
{
	if (ioctl(fd, QUERY_CLR_VARIABLES) == -1)
	{
		perror("query_apps ioctl clr");
	}
}
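
To complete the picture, a possible main() for the application could be on the following lines – a sketch only: the device file name /dev/query_ioctl and the option handling here are assumptions; use whatever device node your driver actually creates, and extend the options (-c, -g, -s) as needed:

#include <stdio.h>
#include <string.h>     /* strcmp() */
#include <fcntl.h>      /* open() */
#include <unistd.h>     /* close() */
#include <sys/ioctl.h>

#include "query_ioctl.h"

/* get_vars() and clr_vars() as defined above */

int main(int argc, char *argv[])
{
	int fd = open("/dev/query_ioctl", O_RDWR); /* assumed device node name */

	if (fd == -1)
	{
		perror("query_apps open");
		return -1;
	}
	if ((argc > 1) && (strcmp(argv[1], "-c") == 0))
		clr_vars(fd);  /* clear the driver variables */
	else
		get_vars(fd);  /* default: display the driver variables */
	close(fd);
	return 0;
}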

The complete code of the above-mentioned three files is included in the folder LDDCode, where the required Makefile is also present. You may download its tar-gzipped file as ldd_code.tgz, untar it, and then do the following to try it out:

  • Build the ‘query_ioctl’ driver (query_ioctl.ko file) and the application (query_app file) by running make using the provided Makefile.
  • Load the driver using insmod query_ioctl.ko.
  • With appropriate privileges and command-line arguments, run the application query_app:
    • ./query_app # To display the driver variables
    • ./query_app -c # To clear the driver variables
    • ./query_app -g # To display the driver variables
    • ./query_app -s # To set the driver variables (Not mentioned above)
  • Unload the driver using rmmod query_ioctl.

Defining the ioctl() commands

“Visiting time is over”, came the call from the security guard, and all of Shweta’s visitors packed up to leave. Stopping them, Shweta said, “Hey!! Thanks a lot for all this help. I could understand most of this code, including the need for copy_to_user(), as we have learnt earlier. But just one question: what are these _IOR, _IO, etc. used in defining the commands in query_ioctl.h? You said we could just use numbers for the same. But you are using all these weird things.” Actually, they are usual numbers only; just that some useful command-related information is now additionally encoded as part of these numbers, using these various macros, as per the Portable Operating System Interface (POSIX) standard for ioctl. The standard talks about the 32-bit command numbers being formed of four components embedded into the [31:0] bits:

  1. Direction of command operation [bits 31:30] – read, write, both, or none – filled by the corresponding macro (_IOR, _IOW, _IOWR, _IO)
  2. Size of the command argument [bits 29:16] – computed using sizeof() with the command argument’s type – the third argument to these macros
  3. 8-bit magic number [bits 15:8] – to render the commands unique enough – typically an ASCII character (the first argument to these macros)
  4. Original command number [bits 7:0] – the actual command number (1, 2, 3, …), defined as per our requirement – the second argument to these macros

“Check out the header <asm-generic/ioctl.h> for implementation details”, concluded Rohan while hurrying out of the room with a sigh of relief.
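
As a small illustration, the components can be pulled back out of a command number using the corresponding decoding macros from that same header – for example, in a user space snippet (a sketch, just to see the encoding in action; the commented values assume x86):

#include <stdio.h>
#include <sys/ioctl.h>   /* pulls in the _IOC_* decoding macros on Linux */

#include "query_ioctl.h"

int main(void)
{
	unsigned int cmd = QUERY_GET_VARIABLES;

	printf("dir  : %u\n", _IOC_DIR(cmd));          /* 2, i.e. _IOC_READ */
	printf("size : %u\n", _IOC_SIZE(cmd));         /* sizeof(query_arg_t *) */
	printf("magic: %c\n", (char)_IOC_TYPE(cmd));   /* 'q' */
	printf("nr   : %u\n", _IOC_NR(cmd));           /* 1 */
	return 0;
}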

Tenth Article >>

Notes

  1. The intention behind the POSIX standard of encoding the command is to be able to verify the parameters, direction, etc related to the command, even before it is passed to the driver, say by VFS. It is just that Linux has not yet implemented the verification part.