Gopher Success: The ins and outs of Golang


The story goes that Google’s open-source programming language, Go - or Golang - owes its inception to the long wait-time involved in compiling programs. Tired of the wait and realizing that the software built at Google was not always well served by the languages available to them, three programmers at Google, Robert Griesemer, Ken Thompson and Rob Pike, decided to create their own programming language. The resulting programming progeny is Golang, an open-source, systems-level language capable of bundling efficient compilation, efficient execution, and ease of programming within one language structure.
The fastest growing programming language of the year on GitHub and currently ranked 10th in the TIOBE index of the most popular programming languages, Go is making its presence felt amongst more established languages such as C++, Python, Java, and C#. Let's have a look at some of the reasons Go is rapidly becoming a favorite of beleaguered developers and our own build choice here at BugReplay.

Picking up the garbage - key features of Golang

Go is a strongly and statically typed compiled language, much like C++, but its lightweight type inference gives it some of the accessibility of dynamically typed languages like Python. Its syntax is reminiscent of the C family, but with only 25 keywords, and, like C, it is a value-oriented rather than reference-oriented language.
Static code analysis isn't new, but Go kicks it up a notch. Thanks to its type safety, Go catches type errors at compile time, sidestepping the need to check types dynamically during execution and yielding better performance at runtime. Similarly, as a compiled language, Go translates source code to native machine code at build time rather than interpreting it on the fly. Again, the result is faster runtime performance.
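To see what that feels like in practice, here is a minimal, self-contained sketch (the variable names are ours, purely for illustration): the := short declaration infers types at compile time, and a type mismatch is rejected before the program ever runs.

```go
package main

import "fmt"

func main() {
	// The compiler infers the types of these variables at compile time.
	retries := 3        // int
	timeout := 2.5      // float64
	name := "BugReplay" // string

	fmt.Println(name, retries, timeout)

	// A type mismatch is a compile error, not a runtime surprise:
	// uncommenting the next line stops the build with a
	// "mismatched types int and float64" error.
	// total := retries + timeout
	total := float64(retries) + timeout // explicit conversion required
	fmt.Println(total)
}
```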
Boasting low-latency garbage collection, which translates to efficient and concurrent automatic memory management, Go is designed to build powerful, large-scale software that can scale to meet hardware requirements now and in the future. Go future-proofs its design using a concurrent, tri-color, mark-and-sweep algorithm, first proposed by Edsger W. Dijkstra decades ago. This is a marked divergence from the customary enterprise-grade garbage collectors, but it means that garbage collection, the bane of most programmers' professional lives, is never an obstacle to creating scalable software.
Google recently expanded Go's powerful package library with the creation of GoCloud. Along with a generic set of APIs for writing simpler, more portable applications, GoCloud offers a set of libraries that makes it even easier for developers to create cloud-based applications with Go. Google's overall aim here, it would seem, is to position Go as the foremost programming language for application development in the cloud.
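As a rough illustration of the idea (our own sketch, not code from the GoCloud announcement, and assuming the gocloud.dev/blob package with its fileblob driver), the same generic bucket API can sit in front of local files, S3, or GCS, with the concrete backend selected by a URL and a driver import:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	// Blank-importing a driver registers its URL scheme. Swapping this
	// import (and the URL below) for an s3 or gcs driver targets cloud
	// storage without touching the rest of the code.
	_ "gocloud.dev/blob/fileblob"
)

func main() {
	ctx := context.Background()

	// Open a bucket backed by a local directory (assumed to exist).
	bucket, err := blob.OpenBucket(ctx, "file:///tmp/demo-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Write and read a blob through the portable API.
	if err := bucket.WriteAll(ctx, "greeting.txt", []byte("hello, cloud"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := bucket.ReadAll(ctx, "greeting.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```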
Go ships with GoDoc, a code-analyzing tool that parses source code and produces beautiful, simplified user documentation that evolves in tandem with the code. And, pleasingly, GoDoc doesn't rely on an extra annotation syntax like JavaDoc, PHPDoc, or JSDoc to mark up constructs in the code, just plain English comments.
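For example, a doc comment is just an ordinary comment placed directly above the declaration it describes (the package and type here are invented for illustration); GoDoc picks it up with no special markup:

```go
// Package metrics collects simple counters for a web service.
package metrics

// Counter accumulates a monotonically increasing count.
// The zero value is ready to use.
type Counter struct {
	n int64
}

// Inc adds one to the counter and returns the new total.
func (c *Counter) Inc() int64 {
	c.n++
	return c.n
}
```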
With an impressive array of features, Go is as popular for what it leaves out as for what it includes. Choosing composition over inheritance, Go decouples intent from definition, meaning that software scales organically without concern for preset hierarchies. With its interface system (an interface is simply a set of methods), Go eliminates subclasses and type-based inheritance: whenever a type implements an interface's methods, it satisfies that interface implicitly, with no explicit declaration of intent required. Just another nifty feature!
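A small sketch of that implicit satisfaction (types and names are ours, for illustration only): nothing in ConsoleLogger says "implements Logger"; it satisfies the interface simply by having the right method.

```go
package main

import "fmt"

// Logger is an interface: just a set of methods.
type Logger interface {
	Log(msg string)
}

// ConsoleLogger never mentions Logger, yet it satisfies the interface
// implicitly because it has a Log method with the matching signature.
type ConsoleLogger struct {
	prefix string
}

func (c ConsoleLogger) Log(msg string) {
	fmt.Println(c.prefix, msg)
}

// notify accepts any value that satisfies Logger, whatever its concrete type.
func notify(l Logger, event string) {
	l.Log("event: " + event)
}

func main() {
	notify(ConsoleLogger{prefix: "[app]"}, "user signed in")
}
```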

Living in a multicore world: the Golang response

In the past decade, processors have gained little in raw power and speed. The practical clock speed of a single core hasn't changed much since Intel's Pentium 4 passed the 3.0 GHz mark in the early 2000s. Workarounds such as larger caches, quad- and octa-core CPUs, and hyper-threading have all proved limited. It's clear that developers can't rely on hardware improvement alone; the primary means of increasing performance is more efficient software.
Most popular programming languages (Java, Python, and so on) were designed for the single-threaded environments of the '90s. Most now support multi-threading, but concurrency brings complications of its own: thread locking, race conditions, and deadlocks all make multi-threaded execution hard to get right.
As a relatively new language, Go was developed in the era of the multi-processor, with concurrency very much at its core, using goroutines and channels instead of OS threads. Goroutines are functions that can run concurrently with other functions and methods, and each starts with a stack of only around 2 KB; channels are built-in primitives that let two or more goroutines communicate and synchronize safely. The result is the ability to spin up millions of goroutines at a time, with much faster, lighter, and more scalable deployment than Java threads. Simply put, Go helps you maximize CPU horsepower by making it easy to design programs that run concurrently across multiple cores.
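A minimal sketch of that model (the fetch function here just simulates slow I/O): each worker runs as a goroutine, and a channel both delivers results and synchronizes the workers with main.

```go
package main

import (
	"fmt"
	"time"
)

// fetch simulates a slow network call and sends its result on the channel.
func fetch(id int, results chan<- string) {
	time.Sleep(100 * time.Millisecond) // pretend to do I/O
	results <- fmt.Sprintf("response from request %d", id)
}

func main() {
	const n = 5
	results := make(chan string)

	// Launch n concurrent goroutines; each starts with a tiny (~2 KB) stack.
	for i := 1; i <= n; i++ {
		go fetch(i, results)
	}

	// Receiving from the channel synchronizes main with the workers:
	// the loop blocks until each result arrives.
	for i := 0; i < n; i++ {
		fmt.Println(<-results)
	}
}
```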

To Go or not to Go?

Go is already quite a mature package: once you install it, you can begin building production-grade software covering a wide range of use cases, from REST APIs to encryption software, before you need to consider third-party packages. Because Go compiles to a single native binary, deploying an application written in Go is a simple matter of copying the application file to the destination server. However, with its strict rules and a somewhat underdeveloped library ecosystem, Go may not yet be a Python or JavaScript killer.
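To illustrate how far the standard library alone gets you, here is a tiny JSON endpoint, a hypothetical health check of our own invention, built with nothing but net/http and encoding/json; go build turns it into a single binary you can copy to the server.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// status is the payload for our hypothetical health-check endpoint.
type status struct {
	Service string `json:"service"`
	Healthy bool   `json:"healthy"`
}

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status{Service: "demo", Healthy: true})
	})

	// go build compiles this into one native binary; deployment is a copy.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```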
A fairly major limitation that many point to is Go's lack of generics, which means less opportunity for code reuse. Go can also make it difficult to determine with certainty whether a struct implements an interface without first attempting to compile the program. Yet another complaint leveled at Go is its approach to error handling, which some find verbose and repetitive. Experienced programmers may find it difficult to shift mindset from classic object-oriented languages such as Java and C++ and start thinking of things the Go way. Here at BugReplay, we're particularly appreciative of Golang's development process: developers have to be attentive and precise to keep code neat and safe, which in our opinion makes the mind-shift worth the effort.
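To make the error-handling complaint concrete, here is the idiomatic pattern in a small sketch of our own: functions return an error value instead of throwing exceptions, and every call site repeats the same explicit check.

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"strconv"
)

// parsePort converts a string to a port number, returning an error
// rather than panicking or throwing an exception.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}

func main() {
	// The same "if err != nil" check appears at every call site: verbose,
	// but each failure path is explicit and visible.
	port, err := parsePort("8080")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("listening on port", port)
}
```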
It must be borne in mind that Go is in its infancy and many issues that developers are currently unhappy with may well be rectified in the fullness of time. What seems certain is that Golang is going places. With a mounting tide of adoption, Go has been battle-tested on such notable projects as Google’s Kubernetes platform, Dropbox, Malwarebytes, Hootsuite, and Basecamp, to name a few. It is quickly becoming a key component in the development of cloud infrastructure, particularly useful for projects involving distributed networks and other complex back-end technologies. So, if you’re thinking of getting into serverless and cloud infrastructures, now might be time to gopher it!

Putting the Ghost in the Machine: Can Making Software Buggier Make it More Secure?


Software bugs are commonplace, particularly in languages like C and C++ that lack memory-safety safeguards, where it is easy for programmer errors to result in memory corruption and arbitrary code execution.

Traditionally, hackers painstakingly trawl through lines of code to discover exploitable programming errors. Any bugs they find must be triaged to determine their level of exploitability. Not all bugs are equal, however: depending on the runtime environment and the nature of the error, many bugs, such as null pointer dereferences, may not violate any security goals at all. These bugs may merely cause a program to crash, which is serious, but back-end microservices are typically designed to restart a program in such events.

Once the triage phase reveals exploitable errors, hackers develop their exploits and deploy them against the vulnerable code.

This process is laborious and largely manual, but it can result in a costly clean-up for the companies involved, as well as potentially career-ending repercussions for those who let the bugs slip by unnoticed.
Conventional countermeasures are just as painstaking: hunting through code for vulnerabilities that might be exploited and removing them before the code goes public.
Recently, though, a team of cybersecurity researchers at New York University (NYU) has begun to advocate a new, military-grade camouflage approach to bug extermination. Their seemingly simple suggestion is to add more bugs to the code, lots and lots more bugs!
Wheat from the Chaff - Why introduce non-exploitable bugs?
Existing exploit mitigations like ASLR and CFI up the ante for hackers but typically come with performance penalties and don't always deter more sophisticated attackers.
NYU researchers Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt have added a new dimension to the security-through-obscurity approach to tackling cybercrime. They recommend that programmers and cybersecurity experts liberally sprinkle non-exploitable bugs throughout their code to confound hackers. These benign bugs, dubbed chaff bugs after the military chaff from which they take their name, are designed to create a multitude of potential targets, wasting hackers' time and resources on finding and triaging what are, in reality, harmless bugs.
Bugs intentionally placed throughout code must appear exploitable to triage tools yet do no harm to the software's functionality. This is easier said than done: adding bugs, and thereby altering lines of code, can render software useless or, worse, malicious. To guard against this outcome, programmers must run the code with different inputs and monitor the results as the code progresses, a lengthy and resource-heavy exercise.

Glitch in the Matrix: Is the Chaff Approach a Panacea?

While this new smokescreen technique has obvious advantages, it is still in its infancy and doesn't yet represent a comprehensive antidote to cyber attacks. So far, there is no definitive proof that finding and exploiting bugs is actually all that arduous for hackers, and it may be that attackers can already use automated tooling to identify decoy bugs and streamline the exploitation process.
At present, chaff bugs have the significant disadvantage of being distinguishable from naturally occurring bugs. It is conceivable that hackers could identify artifacts in the code and discover patterns to exploit to malicious ends. Hu et al. are aware of this limitation and are striving to create bugs that can be camouflaged completely within existing code, while making the triggering conditions for errors more natural.
Another potential impediment to widespread adoption of the current chaff bug technique is that software bugs are the bane of developers' professional lives. It would not be surprising if developers were reluctant to work with code riddled with pre-baked bugs, or worried that later changes could render the supposedly non-exploitable errors exploitable.

Conclusion

The Chaff Bug approach is a novel and mouthwateringly malevolent way to give hackers a taste of their own medicine. Gumming up their systems of attack with a myriad of bugs should, in theory, reduce cybercrime figures. But there is still work to be done.
For this approach to become a more potent deterrent, the nature and variation of the injected bugs must be refined so that they blend seamlessly with existing code. In addition, bugs must be injected successfully at the binary level for the approach to work on legacy systems. Currently, there is no way to employ this technique in open-source software, and with open-source software becoming increasingly widespread, this is a serious deficiency.
Despite its flaws, the current technique appears to work well as an add-on in the build, bolstering the efficacy of existing defenses such as ASLR, DEP, CFI, and CPI. The academics who created this technique hope their work draws attention to the study of exploit triage. They aim to advance the system so that it can become a powerful means of drowning attackers in a sea of deliberately tainted code.
It’s early days, but it will be interesting to see if developers and cybersecurity experts choose to bug out with this new layer of defense!

BGP: The Arthritic Backbone of the Internet

February 2014 was a cold month for Tokyo residents. In the bustling suburb of Shibuya, the Mt. Gox bitcoin exchange felt the chill most acut...