Posted On Jan-17

Superfast parallel snmpbulkwalk

Fastsnmpy is a module that leverages the Python extensions shipped with net-snmp and provides highly parallelized, faster methods to walk OID trees on devices.

In addition, it provides a method to bulkwalk MIB trees.

Bulkwalk methods are missing from the native Python bindings for net-snmp. By wrapping the GetBulk method instead, and maintaining state while traversing the OID tree, fastsnmpy provides a clever way to bulkwalk OIDs much faster.
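The idea can be sketched as follows. This is only an illustration, not fastsnmpy's actual code: `bulkwalk` and `fake_getbulk` are made-up names, and a real implementation would compare OIDs numerically, component by component, rather than as strings.

```python
def bulkwalk(getbulk, root, max_repetitions=10):
    """Collect every (oid, value) under `root` via repeated GetBulk calls."""
    results = []
    cursor = root
    while True:
        batch = getbulk(cursor, max_repetitions)   # list of (oid, value) pairs
        if not batch:
            break
        for oid, value in batch:
            if not oid.startswith(root + "."):
                return results                     # walked past the subtree: done
            results.append((oid, value))
        cursor = batch[-1][0]                      # resume from the last OID seen
    return results

# A tiny in-memory "agent" standing in for a real device. Plain string
# comparison happens to match OID order for this toy table; real code
# compares OID components numerically.
IFDESCR = ".1.3.6.1.2.1.2.2.1.2"
TABLE = [("%s.%d" % (IFDESCR, i), "eth%d" % i) for i in range(1, 6)]
TABLE.append((".1.3.6.1.2.1.2.2.1.3.1", "6"))     # first OID past ifDescr

def fake_getbulk(start_oid, max_reps):
    later = [row for row in TABLE if row[0] > start_oid]
    return later[:max_reps]

print(bulkwalk(fake_getbulk, IFDESCR))
```

Each GetBulk round trip fetches many varbinds at once, which is why keeping state like this beats issuing one GetNext per OID.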

FastSNMPy provides the following:

  • snmpwalk(): The native Python bindings distributed with net-snmp, combined with fastsnmpy’s ability to parallelize snmpwalk operations.
  • snmpbulkwalk(): Bulkwalks devices by leveraging the GetBulk method, making it significantly faster than net-snmp’s implementation of snmpwalk.
  • Process pools: By passing a ‘workers=n’ argument to the above methods, fastsnmpy instantiates a process pool to parallelize the snmpwalk and snmpbulkwalk methods, so several devices are walked at the same time, effectively using all cores on a multicore machine.
  • One-line and two-line scripts that let you discover and walk all devices in an entire datacenter.
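The ‘workers=n’ behavior maps naturally onto a standard process pool. A minimal sketch of the idea, with a hypothetical `walk_host` standing in for the real per-host walk:

```python
from multiprocessing import Pool

def walk_host(host):
    # Placeholder for the real work: open an SNMP session to `host`
    # and walk the requested OIDs.
    return (host, "walked")

if __name__ == "__main__":
    hosts = ["c7200-1", "c7200-2", "c2600-1", "c2600-2"]
    with Pool(processes=4) as pool:           # akin to workers=4
        results = pool.map(walk_host, hosts)  # hosts walked concurrently
    print(results)
```

Because each worker is a separate process, the walks run on separate cores and the GIL never becomes a bottleneck.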

Quick example – Running in interactive mode

Python 2.7.10 (default)

  >>> import netsnmp
  >>> from fastsnmpy import SnmpSession
  >>> hosts = ['c7200-2', 'c7200-1', 'c2600-1', 'c2600-2']
  >>> oids = ['ifDescr', 'ifIndex', 'ifName', 'ifDescr']
  >>> newsession = SnmpSession(targets=hosts, oidlist=oids, community='oznet')
  >>> results = newsession.snmpbulkwalk(workers=15)
  >>> len(results)

Note: To use the module in scripts, please see the examples included with the package.


(1) Walking 30 nodes for ifDescr using snmpwalk():

time ./
real    0m18.63s
user    0m1.07s
sys     0m0.38s

(2) Walking 30 nodes for ifDescr using snmpbulkwalk():

time ./
real    0m9.17s
user    0m0.48s
sys     0m0.11s

(3) Walking 30 nodes for ifDescr using snmpwalk(workers=10):

time ./
real    0m2.27s
user    0m2.87s
sys     0m0.66s

(4) Walking 30 nodes for ifDescr using snmpbulkwalk(workers=10):

time ./
real    0m0.90s
user    0m2.44s
sys     0m0.40s

As you can see, fastsnmpy’s parallelized bulkwalk mode is almost 20 times faster than walking with Python’s native SNMP bindings.
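For reference, here are the speedups the four timings above imply, relative to the plain snmpwalk() run:

```python
# Real (wall-clock) times from the four runs above, in seconds.
timings = [
    ("snmpwalk()", 18.63),
    ("snmpbulkwalk()", 9.17),
    ("snmpwalk(workers=10)", 2.27),
    ("snmpbulkwalk(workers=10)", 0.90),
]
baseline = timings[0][1]
for name, seconds in timings:
    print("%-26s %5.2fs  %5.1fx" % (name, seconds, baseline / seconds))
```

Bulkwalking alone roughly halves the time; combining it with a worker pool compounds the two gains.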

Latest-version: Fastsnmpy2-1.2.1

  Download Here

Get from Git

Or fork from the Git repo: GitHub-FastSNMPy

To thread or not to thread? That is the question!

Posted On Mar-27

Every so often, I find myself pondering this same question. Sometimes I pen my thoughts and conclusions down on a piece of paper so that I don’t have to re-think them later. That beautiful piece of paper always seems to get lost among the clutter (which I call my organized stack) of papers at my desk. So: threads or processes?

Agreed, it comes down to the language of choice, but these days I use Python and Perl heavily in my scripts. And the GIL in Python always seems to make multithreading a daunting task on my multi-core machine.

Of course, we all know that multithreading has its advantages over multiprocessing: smaller footprint, smaller stack size, much lighter weight, and so on. But what happens behind the scenes with threading is what drives early programmers nuts and makes them conclude that threading arose from the Dead Sea.

The multithreading-multiprocessing debate…

In this particular project, the ease of initiating a thread pool lured me down the path of threads. The fact that my underlying net-snmp C library was async turned out to be an added bonus.

Yes, I started threading at first…

A little tidbit here: the scheduling of threads is actually done by the OS, and this sometimes turns out to be a disaster.

Yes, Python relies on the OS to schedule threads; it just releases and re-acquires the GIL. Why not? After all, the underlying OS is all about multithreading, and the people who built it should be experts at scheduling, right? Actually, yes: the OS does a damn good job of this.

However, on a multicore machine the OS is aware of all the available cores, and chances are it will schedule the thread on another core. Boom! You have just created context switching and increased the overall time before the thread actually acquires the GIL. This is invariably what causes multithreading to give poorer-than-expected results.

So I went back to multiprocessing for this. Yes, I know it has higher startup costs, significantly more memory, and so on. But in the end there isn’t a lot of crazy context switching. And in this particular instance I had 32 cores. I was just planning to start the processes once and then pipe their output in a subprocess stream. They worked fine!

…and then went back to processes
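The trade-off is easy to reproduce: run the same CPU-bound job through a thread pool and a process pool of the same size. The names and sizes below are arbitrary and the exact timings depend on the machine, but on a multicore box the process pool wins for CPU-bound work:

```python
import time
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed, same API

def cpu_bound(n):
    # Pure computation: every thread running this contends for the GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200000] * 8

    t0 = time.time()
    with ThreadPool(4) as tpool:
        thread_results = tpool.map(cpu_bound, jobs)
    t1 = time.time()
    with Pool(4) as ppool:
        process_results = ppool.map(cpu_bound, jobs)
    t2 = time.time()

    print("threads:   %.2fs" % (t1 - t0))
    print("processes: %.2fs" % (t2 - t1))
    assert thread_results == process_results      # same answers either way
```

For I/O-bound work (sockets waiting on slow devices, say) the picture flips, because threads release the GIL while blocked.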

I’m not against threads, and I don’t always favor processes either. But oftentimes we have to decide between the two based on the environment, the hardware, the throughput requirements, the desired speed of execution (and a little bit of personal preference). It’s called “using the right tool for the job.”

Ajay Divakaran

Spot the difference

Posted On Feb-21

The difference between the following two pieces of code is pretty evident. Or is it? Apart from the fact that one was almost 60 times faster than the other over 100,000 iterations.

Figuring out the part that was slowing down my coroutine wasn’t easy. Three Red Bulls later…

if "key" in d.keys():
    print "Found pattern"
else:
    print "No match"

versus:

if "key" in d:
    print "Found pattern"
else:
    print "No match"
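The culprit is the membership test itself. In Python 2, d.keys() builds a fresh list on every call, so the `in` test scans it linearly, while testing against the dict directly is a constant-time hash lookup. A sketch that reproduces the gap; it uses list(d.keys()) to mimic Python 2's behavior under Python 3, where keys() returns a cheap view:

```python
import timeit

d = {i: i for i in range(10000)}

# Membership test against a freshly built list of keys: O(n) per call.
slow = timeit.timeit("9999 in list(d.keys())", globals={"d": d}, number=1000)
# Membership test against the dict itself: O(1) hash lookup.
fast = timeit.timeit("9999 in d", globals={"d": d}, number=1000)

print("list(d.keys()): %.4fs   in d: %.4fs" % (slow, fast))
```

The absolute ratio depends on the dict size and the machine, but the list version degrades linearly while the direct test stays flat.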