
The Semester Project – Part VII: File System in Action

This twenty-fourth article, which is part of the series on Linux device drivers, gets the complete real SIMULA file system module in action, with a real hardware partition on your pen drive.

<< Twenty-third Article

Real SFS in action

The code available from rsfs_in_action_code.tbz2 is the final tested implementation of Pugs & Shweta's final semester project. It contains the following:

  • real_sfs.c – contains the code of the earlier real_sfs_minimal.c, plus the remaining real SIMULA file system functionality. Note the file system’s name change from sfs to real_sfs
  • real_sfs_ops.c & real_sfs_ops.h – contain the earlier code, plus the additional operations needed for the enhanced real_sfs.c implementation
  • real_sfs_ds.h – almost the same file as in the previous article, with a spin lock added to the real SFS info structure, to be used for preventing race conditions in accessing the used_blocks array in the same structure
  • format_real_sfs.c (same file as in the previous articles) – the real SFS formatter application
  • Makefile – contains the rules for building the driver real_sfs_final.ko using the real_sfs_*.* files, and the format_real_sfs application using the format_real_sfs.c

With all these and earlier details, Shweta completed their project documentation. And so finally, Shweta & Pugs were all set for their final semester project demo, presentation and viva.

The highlights of their demo (on root shell) are as follows:

  • Loading real_sfs_final driver: insmod real_sfs_final.ko
  • Using the previously formatted pen drive partition /dev/sdb1, or re-formatting it using the format_real_sfs application: ./format_real_sfs /dev/sdb1. Caution: Please check out the complete detailed steps on this from the previous article, before you actually format it
  • Mounting the real SFS formatted partition: mount -t real_sfs /dev/sdb1 /mnt
  • And … And what? Browsing the mounted file system. Use your usual shell commands to operate on the file system: ls, cd, touch, vi, rm, chmod, …

Figure 40 shows the real SIMULA file system in action.

Figure 40: The real SIMULA file system module in action

Realities behind the action

And if you really want to know what additional enhancements Pugs added to the previous article’s code to get to this level, they are basically the following core system calls, as part of the remaining 4 of the 5 sets of function pointer structures (in real_sfs.c):

  1. write_inode (under struct super_operations) – sfs_write_inode() basically gets a pointer to an inode in the VFS’ inode cache, and is expected to sync that with the corresponding inode of the file system in the physical hardware space. That is achieved by calling the appropriately modified sfs_update() (defined in real_sfs_ops.c), adapted from the earlier browse_real_sfs application. The key parameter changes are passing the inode number instead of the filename, and the actual timestamp instead of the flag for its update status. Accordingly, the call to the filename based sfs_lookup() is replaced by the inode number based sfs_get_file_entry() (defined in real_sfs_ops.c), and additionally, the data blocks are now also freed (using sfs_put_data_block(), defined in real_sfs_ops.c), if the file size has reduced. Note that sfs_put_data_block() is a transformation of the put_data_block() from the browse_real_sfs application. Also, note the interesting mapping to / from the VFS inode number from / to our zero-based file entry indices, using the macros S2V_INODE_NUM() / V2S_INODE_NUM() in real_sfs_ops.h.
    And finally, the underlying write is achieved using write_to_real_sfs(), a function added in real_sfs_ops.c, very similar to read_from_real_sfs() (already there in real_sfs_ops.c), except for the reversed direction of the data transfer and the marking of the buffer as dirty, to be synced up with the physical content. Along with that, in real_sfs_ops.c, two wrapper functions read_entry_from_real_sfs() (using read_from_real_sfs()) and write_entry_to_real_sfs() (using write_to_real_sfs()) have been written and used for the specific requirements of reading and writing the file entries, respectively, to increase the code readability. sfs_write_inode() and sfs_update() are shown in the code snippet below; sfs_write_inode() has been written in the file real_sfs.c. For the others, check out the file real_sfs_ops.c.
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,34))
static int sfs_write_inode(struct inode *inode, int do_sync)
#else
static int sfs_write_inode(struct inode *inode, struct writeback_control *wbc)
#endif
{
	sfs_info_t *info = (sfs_info_t *)(inode->i_sb->s_fs_info);
	int size, timestamp, perms;

	printk(KERN_INFO "sfs: sfs_write_inode (i_ino = %ld)\n", inode->i_ino);

	if (!(S_ISREG(inode->i_mode))) // Real SFS deals only with regular files
		return 0;

	size = i_size_read(inode);
	timestamp = inode->i_mtime.tv_sec > inode->i_ctime.tv_sec ?
			inode->i_mtime.tv_sec : inode->i_ctime.tv_sec;
	perms = 0;
	perms |= (inode->i_mode & (S_IRUSR | S_IRGRP | S_IROTH)) ? 4 : 0;
	perms |= (inode->i_mode & (S_IWUSR | S_IWGRP | S_IWOTH)) ? 2 : 0;
	perms |= (inode->i_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) ? 1 : 0;

	printk(KERN_INFO "sfs: sfs_write_inode with %d bytes @ %d secs w/ %o\n",
		size, timestamp, perms);

	return sfs_update(info, inode->i_ino, &size, &timestamp, &perms);
}

int sfs_update(sfs_info_t *info, int vfs_ino, int *size, int *timestamp, int *perms)
{
	sfs_file_entry_t fe; 
	int i;
	int retval;

	if ((retval = sfs_get_file_entry(info, vfs_ino, &fe)) < 0) 
	{   
		return retval; 
	}   
	if (size) fe.size = *size;
	if (timestamp) fe.timestamp = *timestamp;
	if (perms && (*perms <= 07)) fe.perms = *perms;

	for (i = (fe.size + info->sb.block_size - 1) / info->sb.block_size;
		i < SIMULA_FS_DATA_BLOCK_CNT; i++)
	{   
		if (fe.blocks[i])
		{   
			sfs_put_data_block(info, fe.blocks[i]);
			fe.blocks[i] = 0;
		}   
	}   

	return write_entry_to_real_sfs(info, V2S_INODE_NUM(vfs_ino), &fe);
}
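
For a feel of that direction reversal, here is a minimal sketch of what write_to_real_sfs() could look like, built on the kernel's buffer cache API. This is only indicative – not the exact code from real_sfs_ops.c – and it assumes the byte4_t typedef and the sfs_info_t fields (vfs_sb for the VFS super block pointer, sb for the on-disk SFS super block) from real_sfs_ds.h of this series, a plain byte-offset based signature, and a transfer that stays within a single block:

#include <linux/buffer_head.h> /* For sb_bread, mark_buffer_dirty, brelse */
#include <linux/string.h> /* For memcpy */

static int write_to_real_sfs(sfs_info_t *info, byte4_t offset, void *buf, byte4_t len)
{
	byte4_t block_size = info->sb.block_size;
	struct buffer_head *bh;

	if ((offset % block_size) + len > block_size) /* keep it within a single block */
		return -EINVAL;

	/* Bring the block containing the target offset into the buffer cache */
	if (!(bh = sb_bread(info->vfs_sb, offset / block_size)))
		return -EIO;

	memcpy(bh->b_data + (offset % block_size), buf, len); /* the actual write */
	mark_buffer_dirty(bh); /* to be synced up with the physical content */
	brelse(bh);

	return 0;
}

write_entry_to_real_sfs() would then be just a thin wrapper, computing the byte offset of the i-th file entry from the entry table's start recorded in the SFS super block, and calling the above with a length of sizeof(sfs_file_entry_t).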
  2. create, unlink, lookup (under struct inode_operations) – All the 3 functions sfs_inode_create(), sfs_inode_unlink(), sfs_inode_lookup() have 2 common parameters (the parent’s inode pointer and the pointer to the directory entry for the file in consideration), and these respectively create, delete, and look up an inode corresponding to the directory entry pointed to by their parameter, say dentry.
    sfs_inode_lookup() basically searches for the existence of the filename underneath, using the appropriately modified sfs_lookup() (defined in real_sfs_ops.c), adapted from the earlier browse_real_sfs application – the adaptation being the replacement of the user space lseek()+read() combo by read_entry_from_real_sfs(). If the filename is not found, it invokes the generic kernel function d_splice_alias() to hook up the directory entry pointed to by dentry into the directory entry cache (as a negative entry, with no inode yet). Otherwise, it just attaches the inode from VFS’ inode cache to it (using the generic kernel function d_add()). This inode, if obtained fresh (I_NEW), needs to be filled in with the real SFS looked up file attributes. In all the above implementations, and in those to come, a few basic assumptions have been made, namely:

    • Real SFS maintains mode only for the user, and that is mapped to all 3 of user, group, other of the VFS inode
    • Real SFS maintains only one timestamp, and that is mapped to all 3 of the created, modified, accessed times of the VFS inode

    sfs_inode_create() and sfs_inode_unlink() correspondingly invoke the transformed sfs_create() and sfs_remove() (defined in real_sfs_ops.c), adapted from the earlier browse_real_sfs application, for respectively creating and clearing the inode entries in the underlying hardware space file system, apart from the usual inode cache operations – now using new_inode()+insert_inode_locked(), d_instantiate() & inode_dec_link_count(), instead of the earlier learnt iget_locked() and d_add(). Apart from the permissions and file entry parameters, and replacing the lseek()+read() combo by read_entry_from_real_sfs(), sfs_create() has an interesting transformation from user space to kernel space: time(NULL) becomes get_seconds(). And in both sfs_create() and sfs_remove(), the user space lseek()+write() combo has been replaced by the obvious write_entry_to_real_sfs(). Check out all the above mentioned code pieces in the files real_sfs.c and real_sfs_ops.c, as mentioned. A rough sketch of how such a lookup could hang together is shown below.
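
Purely as an indicative shape of the lookup path described above (and not the code shipped in real_sfs.c), assuming the post-3.6 lookup prototype, that sfs_lookup() returns the file entry index (or a negative value when the name is not found), and a hypothetical sfs_fill_vfs_inode() helper that populates a fresh inode from a file entry:

#include <linux/err.h> /* For ERR_PTR */

static struct dentry *sfs_inode_lookup(struct inode *parent_inode,
					struct dentry *dentry, unsigned int flags)
{
	sfs_info_t *info = (sfs_info_t *)(parent_inode->i_sb->s_fs_info);
	sfs_file_entry_t fe;
	struct inode *inode;
	int ino;

	if ((ino = sfs_lookup(info, (char *)(dentry->d_name.name), &fe)) < 0)
	{
		/* Not present underneath: leave a negative dentry behind */
		return d_splice_alias(NULL, dentry);
	}

	/* Present: obtain the inode from VFS' inode cache (fresh or cached) */
	inode = iget_locked(parent_inode->i_sb, S2V_INODE_NUM(ino));
	if (!inode)
		return ERR_PTR(-ENOMEM);
	if (inode->i_state & I_NEW) /* allocated fresh: fill it in */
	{
		sfs_fill_vfs_inode(inode, &fe); /* hypothetical helper: mode, size, times, ops */
		unlock_new_inode(inode);
	}

	d_add(dentry, inode); /* attach the looked-up inode to the dentry */
	return NULL;
}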

  3. readpage, write_begin, writepage, write_end (under struct address_space_operations) – All these address space operations are basically to read and write blocks on the underlying file system, and are achieved using the respective generic kernel functions mpage_readpage(), block_write_begin(), block_write_full_page(), generic_write_end(). The first one is prototyped in <linux/mpage.h> and the remaining 3 in <linux/buffer_head.h>. Now, though these functions are generic enough, a little thought would show that the first three of these would ultimately have to do a real SFS specific transaction with the underlying block device (the hardware partition), using the corresponding block layer APIs. And that exactly is achieved by the real SFS specific function sfs_get_block(), which is passed into and used by the first three functions mentioned above.
    sfs_get_block() (defined in real_sfs.c) is invoked to read a particular block number (iblock) of a file (denoted by an inode) into a buffer head (bh_result), optionally fetching (allocating) a new block. For that, the block array of the corresponding real SFS inode is looked up, and then the corresponding block of the physical partition is fetched using the kernel API map_bh(). Again note that to fetch a new block, we invoke sfs_get_data_block() (defined in real_sfs_ops.c), which is again a transformation of the get_data_block() from the browse_real_sfs application. Also, in case of a new block allocation, the real SFS inode is updated underneath as well, using sfs_update_file_entry() – a one-liner implementation in real_sfs_ops.c. The code snippet below shows the sfs_get_block() implementation, followed by an indicative sketch of how it plugs into the generic address space operations.
static int sfs_get_block(struct inode *inode, sector_t iblock,
				struct buffer_head *bh_result, int create)
{
	struct super_block *sb = inode->i_sb;
	sfs_info_t *info = (sfs_info_t *)(sb->s_fs_info);
	sfs_file_entry_t fe;
	sector_t phys;
	int retval;

	printk(KERN_INFO "sfs: sfs_get_block called for I: %ld, B: %llu, C: %d\n",
		inode->i_ino, (unsigned long long)(iblock), create);

	if (iblock >= SIMULA_FS_DATA_BLOCK_CNT)
	{
		return -ENOSPC;
	}
	if ((retval = sfs_get_file_entry(info, inode->i_ino, &fe)) < 0)
	{
		return retval;
	}
	if (!fe.blocks[iblock])
	{
		if (!create)
		{
			return -EIO;
		}
		else
		{
			if ((fe.blocks[iblock] = sfs_get_data_block(info)) ==
				INV_BLOCK)
			{
				return -ENOSPC;
			}
			if ((retval = sfs_update_file_entry(info, inode->i_ino, &fe))
				< 0) 
			{   
				return retval;
			}
		}
	}
	phys = fe.blocks[iblock];
	map_bh(bh_result, sb, phys);

	return 0;
}
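
And here is the promised indicative sketch of how sfs_get_block() plugs into the generic address space operations named above. This is not lifted from real_sfs.c – member names and helper signatures vary across kernel versions – but it shows the intended plumbing for a kernel in the 3.x range:

#include <linux/mpage.h> /* For mpage_readpage */
#include <linux/buffer_head.h> /* For block_write_begin, block_write_full_page */

static int sfs_readpage(struct file *file, struct page *page)
{
	return mpage_readpage(page, sfs_get_block); /* read a page worth of blocks */
}

static int sfs_writepage(struct page *page, struct writeback_control *wbc)
{
	return block_write_full_page(page, sfs_get_block, wbc); /* write back a dirty page */
}

static int sfs_write_begin(struct file *file, struct address_space *mapping,
		loff_t pos, unsigned len, unsigned flags,
		struct page **pagep, void **fsdata)
{
	*pagep = NULL;
	return block_write_begin(mapping, pos, len, flags, pagep, sfs_get_block);
}

static struct address_space_operations sfs_aops =
{
	.readpage = sfs_readpage,
	.writepage = sfs_writepage,
	.write_begin = sfs_write_begin,
	.write_end = generic_write_end,
};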
  4. open, release, read, write, aio_read/read_iter (since kernel v3.16), aio_write/write_iter (since kernel v3.16), fsync (under a regular file’s struct file_operations) – All these operations should basically go through the buffer cache, i.e. should be implemented using the address space operations. And this being a common requirement, the kernel provides a generic set of APIs, namely generic_file_open(), do_sync_read()/new_sync_read() (since kernel v3.16), do_sync_write()/new_sync_write() (since kernel v3.16), generic_file_aio_read()/generic_file_read_iter() (since kernel v3.16), generic_file_aio_write()/generic_file_write_iter() (since kernel v3.16), simple_sync_file()/noop_fsync() (since kernel v2.6.35). Moreover, the file operations read & write are no longer required to be given since kernel v4.1. Note that there is no API for release, as it is a ‘return 0‘ API. Check out the real_sfs.c file for the exact definition of struct file_operations sfs_fops (an indicative definition for recent kernels is also sketched right after this list).
  5. readdir/iterate (since kernel v3.11) (under a directory’s struct file_operations) – sfs_readdir()/sfs_iterate() primarily reads the directory entries of an underlying directory (denoted by file), and fills them up into the VFS directory entry cache (pointed to by the dirent parameter) using the parameter function filldir, or into the directory context (pointed to by the ctx parameter) (since kernel v3.11).
    As real SFS has only one directory, the top one, this function basically fills up the directory entry cache with directory entries for all the files in the underlying file system, using the transformed sfs_list() (defined in real_sfs_ops.c), adapted from the browse_real_sfs application. Check out the real_sfs.c file for the complete sfs_readdir()/sfs_iterate() implementation, which starts with filling directory entries for ‘.‘ (current directory) and ‘..‘ (parent directory) using parameter function filldir(), or generic kernel function dir_emit_dots() (since kernel v3.11), and then for all the files of the real SFS, using sfs_list().
  6. put_super (under struct super_operations) – The previous custom implementation sfs_kill_sb() (pointed to by kill_sb) has been enhanced by separating it: the custom part has been put into sfs_put_super() (now pointed to by put_super), and the standard kill_block_super() is directly pointed to by kill_sb. Check out the file real_sfs.c for all these changes.
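
As a quick illustration of point 4 above on the regular file operations, here is what such a delegating sfs_fops could look like on a recent (v4.1+) kernel. This is only indicative – the actual definition in real_sfs.c is #if’d across the kernel versions listed above:

static struct file_operations sfs_fops =
{
	.open = generic_file_open, /* generic open for regular files */
	.read_iter = generic_file_read_iter, /* reads through the page cache */
	.write_iter = generic_file_write_iter, /* writes through the page cache */
	.fsync = noop_fsync, /* nothing special to sync beyond the generic path */
	/* no .release member needed: it would just be a 'return 0' */
};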

With all these in place, one could see the amazing demo by Pugs in action, as shown in Figure 40. And don’t forget to watch the live log in /var/log/messages using ‘tail -f /var/log/messages‘, matching it with every command you issue on the mounted real SFS file system. This would give you the best insight into when which system call gets called. Or, in other words, which application invokes which system call from the file system front. For tracing all the system calls invoked by an application/command, use strace with the command, e.g. type ‘strace ls‘ instead of just ‘ls‘.

Notes

  1. On distros like Ubuntu, you may find the log under /var/log/syslog instead of /var/log/messages

The Semester Project – Part V: File System Module Template

This twenty-second article, which is part of the series on Linux device drivers, lays out a bare bone file system module.

<< Twenty-first Article

With the formatting of the pen drive, the file system is all set in the hardware space. Now, it is the turn to decode that using a corresponding file system module in the kernel space, and accordingly provide the user space file system interface, for it to be browsed like any other file system.

The 5 sets of System Calls

Unlike character or block drivers, file system drivers involve not just one structure of function pointers, but 5 structures of function pointers, for the various interfaces provided by a file system. These are:

  • struct file_system_type – contains functions to operate on the super block
  • struct super_operations – contains functions to operate on the inodes
  • struct inode_operations – contains functions to operate on the directory entries
  • struct file_operations – contains functions to operate on the file data (through page cache)
  • struct address_space_operations – contains page cache operations for the file data

With these, there were many new terms for Pugs. He referred to the following glossary to understand the various terms used above, and later in the file system module development:

  • Page cache or Buffer cache: Pool of RAM buffers, each of page size (typically 4096 bytes). These buffers are used as the cache for the file data read from the underlying hardware, thus increasing the performance of file operations
  • Inode: Structure containing the meta data / information of a file, like permissions, owner, etc. Though the file name is meta data of a file, for better space utilization, in typical Linux file systems it is not kept in the inode, but instead in something called directory entries. A collection of inodes is called an inode table
  • Directory entry: Structure containing the name and inode number of a file or directory. In typical Linux based file systems, a collection of directory entries for the immediate files and directories of, say, directory D, is stored in the data blocks of the directory D (a conceptual sketch of such an entry follows this list)
  • Super block: Structure containing the information about the various data structures of the file system, like the inode tables, … Basically the meta meta data, i.e. the meta data for the meta data
  • Virtual File System (VFS): Conceptual file system layer interfacing the kernel space to user space in an abstract manner, showing “everything” as a file, and translating their operations from user to the appropriate entity in the kernel space
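
Just to make the directory entry term concrete: conceptually, it is as small as a name paired with an inode number. The structure below is only a conceptual illustration – it is neither a kernel structure nor one from real_sfs_ds.h:

/* Conceptual illustration only: a minimal on-disk directory entry */
typedef struct conceptual_dir_entry
{
	char name[16]; /* the file name: the one piece of meta data NOT kept in the inode */
	unsigned int inode_number; /* index into the inode table, where the rest of the meta data lives */
} conceptual_dir_entry_t;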

Each one of the above five structures contains a list of function pointers, which need to be populated depending on what all features are there, or are to be supported, in the file system (module). For example, struct file_system_type may contain system calls for mounting and unmounting a file system, basically operating on its super block; struct super_operations may contain inode read/write system calls; struct inode_operations may contain functions to look up directory entries; struct file_operations may generically operate on the page cached file data, which may in turn invoke the page cache operations defined in the struct address_space_operations. For these various operations, most of these functions would then interface with the corresponding underlying block device driver, to ultimately operate on the formatted file system in the hardware space.

To start with, Pugs laid out the complete framework of his real SFS module, but with minimal functionality – good enough to compile, load, and not crash the kernel. He populated only the first of these five structures – the struct file_system_type – and left all the others empty. Here’s the exact code of the structure definitions:

#include <linux/fs.h> /* For system calls, structures, ... */

static struct file_system_type sfs;
static struct super_operations sfs_sops;
static struct inode_operations sfs_iops;
static struct file_operations sfs_fops;
static struct address_space_operations sfs_aops;
#include <linux/version.h> /* For LINUX_VERSION_CODE & KERNEL_VERSION */

static struct file_system_type sfs =
{
	name: "sfs", /* Name of our file system */
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,38))
	get_sb:  sfs_get_sb,
#else
	mount:  sfs_mount,
#endif
	kill_sb: kill_block_super,
	owner: THIS_MODULE
};

Note that before Linux kernel version 2.6.38, the mount function pointer was referred to as get_sb, and it also used to have slightly different parameters. Hence, the above #if, for it to be compatible at least across 2.6.3x and possibly with 3.x kernel versions – no guarantee for others. Accordingly, the corresponding functions sfs_get_sb() and sfs_mount() are also #if’d, as follows:

#include <linux/kernel.h> /* For printk, ... */

#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,38))
static int sfs_get_sb(struct file_system_type *fs_type, int flags,
			const char *devname, void *data, struct vfsmount *vm)
{
	printk(KERN_INFO "sfs: devname = %s\n", devname);

	/* sfs_fill_super() will be called to fill the super block */
	return get_sb_bdev(fs_type, flags, devname, data, &sfs_fill_super, vm);
}
#else
static struct dentry *sfs_mount(struct file_system_type *fs_type,
					int flags, const char *devname, void *data)
{
	printk(KERN_INFO "sfs: devname = %s\n", devname);

	/* sfs_fill_super() will be called to fill the super block */
	return mount_bdev(fs_type, flags, devname, data, &sfs_fill_super);
}
#endif

The only difference between the above 2 functions is that in the latter, the VFS mount point related structure has been removed. The printk() in there would display the underlying partition’s device file which the user is going to mount – basically the pen drive’s SFS formatted partition. get_sb_bdev() and mount_bdev() are generic block device mount functions for the respective kernel versions, defined in fs/super.c and prototyped in <linux/fs.h>. Pugs also used them, as most other file system writers do. Are you wondering: do all file systems mount a block device the same way? Mostly yes, except for the part where the mount operation needs to fill in the VFS’ super block structure (struct super_block) as per the super block of the underlying file system – which obviously would most probably be different. But then how does it do that? Observe carefully: in the above functions, apart from passing all the parameters as is, there is an additional parameter sfs_fill_super, and that is Pugs’ custom function to fill the VFS’ super block as per the SFS file system.

Unlike the mount function pointer, the unmount function pointer has been the same (kill_sb) for quite a few kernel versions; and for unmounting, not even a minimal distinction is required across different file systems. So, the generic block device unmount function kill_block_super() has been used directly as the function pointer.

In sfs_fill_super(), Pugs is ideally supposed to read the super block from the underlying hardware-space SFS, and then accordingly translate and fill that into the VFS’ super block, to enable VFS to provide the user space file system interface. But he is yet to figure out how to read from the underlying block device in the kernel space. The information about which block device to use is already embedded into the super_block structure itself, obtained from the user issuing the mount command. But as Pugs decided to get the bare bone real SFS up first, he went ahead and wrote this sfs_fill_super() function as a hard-coded fill function, as well. And with that itself, he registered the Simula file system with the VFS. As with any other Linux driver, here’s the file system driver’s constructor and destructor for that:

#include <linux/module.h> /* For module related macros, ... */

static int __init sfs_init(void)
{
	int err;

	err = register_filesystem(&sfs);
	return err;
}

static void __exit sfs_exit(void)
{
	unregister_filesystem(&sfs);
}

module_init(sfs_init);
module_exit(sfs_exit);

Both register_filesystem() and unregister_filesystem() take a pointer to the struct file_system_type sfs (filled above) as their parameter, to respectively register and unregister the file system described by it.

Hard-coded SFS super block and root inode

And yes, here’s the hard-coded sfs_fill_super() function:

#include "real_sfs_ds.h" /* For SFS related defines, data structures, ... */

static int sfs_fill_super(struct super_block *sb, void *data, int silent)
{
	printk(KERN_INFO "sfs: sfs_fill_super\n");

	sb->s_blocksize = SIMULA_FS_BLOCK_SIZE;
	sb->s_blocksize_bits = SIMULA_FS_BLOCK_SIZE_BITS;
	sb->s_magic = SIMULA_FS_TYPE;
	sb->s_type = &sfs; // file_system_type
	sb->s_op = &sfs_sops; // super block operations

	sfs_root_inode = iget_locked(sb, 1); // obtain an inode from VFS
	if (!sfs_root_inode)
	{
		return -EACCES;
	}
	if (sfs_root_inode->i_state & I_NEW) // allocated fresh now
	{
		printk(KERN_INFO "sfs: Got new root inode, let's fill in\n");
		sfs_root_inode->i_op = &sfs_iops; // inode operations
		sfs_root_inode->i_mode = S_IFDIR | S_IRWXU |
			S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH;
		sfs_root_inode->i_fop = &sfs_fops; // file operations
		sfs_root_inode->i_mapping->a_ops = &sfs_aops; // address operations
		unlock_new_inode(sfs_root_inode);
	}
	else
	{
		printk(KERN_INFO "sfs: Got root inode from inode cache\n");
	}

#if (LINUX_VERSION_CODE < KERNEL_VERSION(3,4,0))
	sb->s_root = d_alloc_root(sfs_root_inode);
#else
	sb->s_root = d_make_root(sfs_root_inode);
#endif
	if (!sb->s_root)
	{
		iget_failed(sfs_root_inode);
		return -ENOMEM;
	}

	return 0;
}

As mentioned earlier, this function is basically supposed to read the underlying SFS super block, and accordingly translate and fill the struct super_block pointed to by its first parameter sb. So, understanding it is the same as understanding the minimal fields of the struct super_block which are getting filled up. The first three are the block size, its logarithm to base 2, and the type/magic code of the Simula file system. As Pugs codes further, we shall see that once he gets the super block from the hardware space, he would instead get these values from that super block, and more importantly, verify them, to ensure that the correct partition is being mounted.
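
Just to give a flavour of that verification to come, a check along these lines could be done once the on-disk SFS super block has been read in. This is only an indicative sketch, assuming the sfs_super_block_t layout (with the type and block_size fields) from real_sfs_ds.h:

/* Indicative only: verify that what has been read really is a Simula file system */
static int sfs_verify_super(sfs_super_block_t *ssb)
{
	if (ssb->type != SIMULA_FS_TYPE)
	{
		printk(KERN_ERR "sfs: invalid SFS magic - not a real SFS partition\n");
		return -EINVAL;
	}
	if (ssb->block_size != SIMULA_FS_BLOCK_SIZE)
	{
		printk(KERN_ERR "sfs: unsupported SFS block size\n");
		return -EINVAL;
	}
	return 0;
}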

After that, the various structure pointers are pointed to their corresponding structures of function pointers. Last but not least, the root inode is obtained from VFS’ inode cache, based on the inode number of root – which right now has been hard-coded to 1; it may possibly change. If the inode structure is obtained fresh, i.e. for the first time, it is then filled in as per the underlying SFS’ root inode’s content. Also, the mode field is hard-coded to “drwxr-xr-x”. Apart from that, the usual structure pointers are initialized with the corresponding structure addresses. And finally, the root’s inode is attached to the super block through its s_root pointer, using d_alloc_root() or d_make_root(), as per the kernel version.

All the above code pieces, put together as the bare bone real_sfs_bb.c, along with real_sfs_ds.h (based on the same file created earlier) and a supporting Makefile, are available from rsfsbb_code.tbz2.

Bare bone SFS module in action

Once compiled using make, to get the real_sfs_bb.ko driver, Pugs did his usual unusual experiments, shown in Figure 38.

Figure 38: Bare-bone real SFS experiments

Pugs’ experiments (Explanation of Figure 38):

  • Checked the kernel window /proc/filesystems for the kernel supported file systems
  • Loaded the real_sfs_bb.ko driver
  • Re-checked the kernel window /proc/filesystems for the kernel supported file systems. Now, it shows sfs listed at the end
  • Did a mount of his pen drive partition /dev/sdb1 onto /mnt using the sfs file system. Checked the dmesg logs in the adjacent window. (Keep in mind that right now, sfs_fill_super() is not really reading the partition, and hence not doing any checks. So, it really doesn’t matter how /dev/sdb1 is formatted.) But yes, the mount output shows that it is mounted using the sfs file system

Oops!!! But the df output shows “Function not implemented”, and cd gives “Not a directory”. Aha!! Pugs hasn’t implemented any of the functions in any of the other four function pointer structures, yet. So, that’s expected.

Note: The above experiments are using “sudo”. Instead, one may get into a root shell and do the same without the “sudo”.

Okay, so no kernel crashes, and a bare bone file system in action – Yippee. Ya! Ya! Pugs knows that df, cd, … are not yet functional. For that, he needs to start adding the various system calls in the other (four) function pointer structures to be able to do cool-cool browsing, the same way as is done with all other file systems, using the various shell commands. And yes, Pugs is already onto his task – after all he needs to have a geeky demo for his final semester project.

Twenty-third Article >>
