IDEs: Panacea or Pain in the Rear?

IDE? What Gives?

The words IDE and editor are frequently and mistakenly conflated. The differentiating factor is functionality. While an editor, such as Notepad++, Emacs, NEdit, or Vi, enables you to create and manipulate plain text, an IDE goes much further.
The code insight alone that an IDE brings to the programming process elevates its functionality above that of a simple editor. Some text editors include basic code intelligence (such as calltips and autocomplete), but none have the refactoring and profiling depth of an IDE. The ability to interpret and profile the code as it compiles, providing visual cues that identify classes, functions, and variables on the fly, is reason enough for many to choose an IDE; however, the rationale for choosing the IDE route goes deeper still.
While your common-or-garden editor is great for writing code, it does nothing to help you with the bulk of your work: testing and debugging. IDEs integrate your editor with your compiler and debugger to detect and triage errors and prevent bottlenecks. While this function is not infallible, it is certainly easier and faster than manual debugging, and it makes that seemingly never-ending story easier to tackle.
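To make the point concrete, here is a minimal sketch, using only Python's standard library and an invented buggy snippet, of the kind of pre-run syntax check an IDE performs silently as you type:

```python
import ast

# An invented snippet with a missing colon on the "for" line.
SOURCE = """
def total(prices):
    result = 0
    for p in prices
        result += p
    return result
"""

def pre_run_check(source: str) -> None:
    """Flag syntax errors before the code ever runs, much as an
    IDE underlines them while you type."""
    try:
        ast.parse(source)
        print("No syntax errors found.")
    except SyntaxError as err:
        print(f"Syntax error on line {err.lineno}: {err.msg}")

pre_run_check(SOURCE)  # e.g. "Syntax error on line 4: expected ':'"
```

An IDE simply runs checks like this (and far deeper ones) on every keystroke, so you never have to ask for them.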
IDEs also offer collaboration features that provide the means to share data flows over encrypted peer-to-peer connections. For developers working in teams, or for those occasions when you simply need the advice or reassurance of a trusted colleague or friend (who hasn't been there?), this is an attractive incentive.
Unclean, Unclean!
While IDE users are a large and loyal group, there are many emphatic detractors who denounce IDEs and champion a return to more basic, manual code-development techniques and tools.
Amongst the naysayers are those asserting that IDEs are somehow less than their command-line forebears: less pure, less real, less attuned to the complexities of the process. These programming puritans advocate wresting control back from the jaws of the IDE. They eschew the luxuries of autocomplete, version-control integration, and dependency importing in favor of getting down and dirty developing their own code and coding processes.
They argue that IDEs are more of a Swiss Army knife than a focused development tool, and that overreliance on visual designers and IntelliSense-style features diminishes your coding skills and experience. Analyzing and fixing your own syntax and semantic errors, they suggest, strengthens your programming prowess.
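The distinction between those two error classes is worth spelling out. Here is a toy illustration (the function is hypothetical, not from any real codebase):

```python
# A syntax error stops the parser cold; a semantic error runs happily
# and is simply wrong. Tooling catches the first, but only a human who
# understands the intent catches the second.

# Syntax error -- any tool (or eyeball) flags this immediately:
#     total = sum(values    # missing closing parenthesis

# Semantic error -- perfectly valid Python that no parser can flag:
def average(values):
    return sum(values) / len(values) + 1  # stray "+ 1" the tools won't see

print(average([2, 4, 6]))  # prints 5.0 instead of the intended 4.0
```

The purists' point is that hunting down the second kind of bug yourself is exactly what builds programming muscle.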
Bruce Maxwell, Professor of Computer Science at Colby College, explained his problem with IDE integration:
I’ve watched student after student come to the same conclusion that editing the text file is faster and easier and more precise…It comes down to this: you can create nice, general graphics tools that will let you do a certain set of tasks in a certain design space. But as soon as you have a specific idea or concept or need that falls outside of those tasks or the design space, or you want more detailed control, you fall back to the most general tool of all: a text editor and a programming language.
Conversely, you could argue that only the more confident and adept programmer should use IDEs. Much like the young mathematics student who must learn to go it alone without the aid of a calculator, the less experienced programmer should learn to execute their own code and navigate the perils of scripting errors without any buffer — the 'if you don't make mistakes, you don't make anything' argument. Perhaps only then should they consider using an IDE.
Others point to the problem of bloat. IDEs are typically big brutes: they hog memory, are slow to load, and can be so packed with features that they are difficult to navigate. Those who leave their systems in perpetual sleep mode to avoid the wait times often complain that their machines become slower over time.
Crashes are not uncommon when dealing with complex and powerful IDEs. Minor exceptions in your own code can cause the program to hang and exit, requiring you to scrub metadata and start from scratch.
There is also a cost element to IDE commitment. Depending on your choice of language, you may have to buy expensive packages that present a steep learning curve, demanding large quantities of your time.
Zen and the Art of Compiler Maintenance
So, are IDEs dumbing down our code? Are we too reliant? Should we return to nature?
Ultimately, whatever your background and level (real or perceived) of expertise, IDEs are a personal choice. If you prefer to manipulate your binary and hexadecimal numbers manually, then good for you.
If, on the other hand, you choose to use an IDE because it makes you more productive, gives you peace of mind, frees your creative impulses so you can focus on the bigger picture, then fantastic! No shame in that!
In an industry where change occurs by the zeptosecond (yeah, that's a thing!), it's impossible to compete at every level and get in front of every trend. The person next to you may hit on the means to reach your intended goal more quickly and with greater ease. That's how the cookie crumbles. In the end, it's all about momentum: whatever keeps us driving forward and gives us the confidence to stay in the game. Whether that be an IDE, Vim, or going back to nature, make your peace with your choice and push on!

Elementary: The Future of the AI Bug-Detectives

Can AI detect bugs and errors, and prevent security breaches?

Shields down

The Internet of Things is rapidly creating a global ecosystem of computer-human interdependence. While this synergy is driving innovation and advancement in almost every facet of our lives, it exposes us to new and challenging vulnerabilities that must be met with new and comprehensive countermeasures.

Security vulnerabilities are growing in lockstep with accelerated software development and application complexity. There is an ever-increasing onus on developers to ensure that they create robust programs that can withstand the advancing threat. With billions of lines of code written every year, it is currently impossible to guarantee error-free code — while developers are, by nature, a highly focused and meticulous bunch, to err is human, and they are still just humans!

Sandeep Neema, a program manager at the US military's Defense Advanced Research Projects Agency (DARPA), says, “What’s concerning and challenging is that the bugs in software are not decreasing,” which is why DARPA spends millions of dollars funding the development of artificial intelligence (AI) systems that can detect software flaws. In an era where even the simplest of programming errors can throw open the doors to malicious intrusion, it is not surprising that many businesses are looking closely at the role AI and machine learning (ML) could play in reinforcing cyber defenses.

Beyond the old reliables

The number of cyber attacks is rising. Security researchers regularly discover new malware, as well as advanced malware variants such as Mylobot. Traditional and legacy antivirus solutions simply cannot compete with advanced threats such as the recent WannaCry ransomware attack. It is estimated that 50–75% of development time is spent on testing, with many errors caught thanks to firewalls, assertions, code reviews, IDE warnings, varying compilers for different OSes, running on different hardware, and so on. It is still common for developers to review each other’s code and run tests before launching new programs. Despite this enormous commitment to preventing, detecting, and triaging faulty code, errors still account for 9 out of every 10 instances of cybercrime. With a 2017 Enterprise Risk Index report claiming that only 50% of file-based attacks were submitted to malware repositories, it is clear that the hackers have the upper hand. Using polymorphism and obfuscation, targeted attacks that evade overloaded security teams, and automation at scale, attackers are making it nigh on impossible for traditional solutions to keep pace. It is clearly time to up the ante.
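As a taste of what all that testing time buys, here is a small sketch of one of the defensive techniques listed above: an assertion that turns silently bad data into a loud failure (the pricing function is invented purely for illustration):

```python
def apply_discount(price: float, discount_pct: float) -> float:
    # Assertions document the function's assumptions and fail fast
    # during test runs, before bad data becomes an exploitable bug.
    assert 0 <= discount_pct <= 100, f"discount out of range: {discount_pct}"
    assert price >= 0, f"negative price: {price}"
    return price * (1 - discount_pct / 100)

print(apply_discount(80.0, 25))    # 60.0
# apply_discount(80.0, 250)        # AssertionError: discount out of range: 250
```

Cheap checks like these catch many bugs early, yet, as the statistics above suggest, nowhere near all of them.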

In 2017, Microsoft announced the rollout of a new error- and virus-detection tool designed to meet the requirements of the current threat landscape. The Microsoft Security Risk Detection tool, formerly Project Springfield, is an advanced, cloud-based fuzzing service that uses AI to root out risks before a program is made generally available. John Heasman, senior director of software security at DocuSign, where the tool was trialed, lauded its effectiveness, saying, “It’s rare that these solutions have such a low rate of false positives.” False positives traditionally pose a huge problem, taking so long to investigate that security experts risk missing the real bugs as they sort through the false ones.
Microsoft’s lead researcher for the project, David Molnar, says:
We use AI to automate the same reasoning process that you or I would use to find a bug, and we scale it out with the power of the cloud.
The Microsoft Security Risk Detection tool is an additional layer of security that supports the work of developers; however, it is not yet a replacement for all other systems of threat management.
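Microsoft hasn’t published the tool’s internals, and its AI-guided “whitebox” fuzzing is far more sophisticated than anything shown here, but the core idea of fuzzing fits in a few lines of Python. The fragile parser below is invented purely to give the fuzzer something to break:

```python
import random
import string

def fragile_parser(data: str) -> int:
    """A deliberately buggy function standing in for code under test."""
    if data.startswith("LEN:"):
        return int(data[4:])          # crashes on inputs like "LEN:abc"
    return len(data)

def fuzz(target, runs: int = 10_000):
    """Hammer the target with random inputs and record the ones that crash it."""
    crashes = []
    for _ in range(runs):
        payload = "".join(random.choices(string.printable, k=random.randint(0, 12)))
        if random.random() < 0.3:     # bias some inputs toward the parser's grammar
            payload = "LEN:" + payload
        try:
            target(payload)
        except Exception as exc:      # any unhandled exception is a finding
            crashes.append((payload, exc))
    return crashes

findings = fuzz(fragile_parser)
print(f"{len(findings)} crashing inputs found")
```

What Microsoft adds on top of this brute-force loop is the intelligence: using AI to choose inputs that reach deep, untested paths instead of spraying random bytes.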
Social media giant Facebook is also getting in on the act. Facebook’s Artificial Intelligence Research (FAIR) team has rolled out an automated software-testing tool, Sapienz. Working in conjunction with Facebook's Infer static-analysis program, Sapienz uses AI to pinpoint weaknesses in code before passing the information to SapFix, the company's new AI-hybrid automatic fix generator.
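Infer performs deep interprocedural analysis on real codebases, but the basic pattern of static analysis (inspecting source without ever running it) can be sketched with Python's ast module. The flagged construct below, a bare except, is a classic static-analysis finding; the sample code is invented:

```python
import ast

SOURCE = """
def save(record):
    try:
        db.write(record)
    except:            # swallows every error, including KeyboardInterrupt
        pass
"""

class BareExceptFinder(ast.NodeVisitor):
    """Walk the syntax tree and report bare `except:` handlers."""
    def visit_ExceptHandler(self, node: ast.ExceptHandler) -> None:
        if node.type is None:
            print(f"line {node.lineno}: bare except hides real failures")
        self.generic_visit(node)

BareExceptFinder().visit(ast.parse(SOURCE))
# -> line 5: bare except hides real failures
```

Because nothing is executed, checks like this scale to millions of lines, which is precisely why Facebook runs Infer on every code change.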

SapFix can also run independently of Sapienz. By reducing debugging hours, it speeds up development and roll-out time. So committed is Facebook to advancing its research into AI and ML solutions that it is opening a new AI lab in Paris to supplement similar facilities in New York and Silicon Valley.

AI bug-detection continues its march into gaming territory 

In March 2018, Ubisoft announced its new Commit Assistant tool, which uses AI to flag potentially faulty code before it is implemented (maybe even before it is written). The developers of Commit Assistant trained their model on almost 10 years of code from Ubisoft's own software libraries, learning from past errors so as to flag them should they reappear. Ubisoft claims that almost 70% of its annual budget is consumed attending to threats to its programs, so this new investment in AI bug detection could have huge consequences for the bottom line.
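Ubisoft hasn’t published how Commit Assistant works under the hood, but the general recipe for this kind of tool is well known: featurize historical commits, label the ones that later needed a bug fix, and train a classifier to score new commits. Here is a toy sketch of that recipe (all features and numbers are invented, and scikit-learn is assumed):

```python
from sklearn.linear_model import LogisticRegression

# Features per commit: [lines changed, files touched, author's prior bug count]
X_train = [
    [500, 12, 9],   # big, sprawling change with a bug-prone history
    [420, 10, 7],
    [15, 1, 0],     # small, focused change
    [30, 2, 1],
    [250, 6, 4],
    [8, 1, 2],
]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = commit later required a bug fix

model = LogisticRegression().fit(X_train, y_train)

new_commit = [[310, 8, 5]]
risk = model.predict_proba(new_commit)[0][1]
print(f"predicted bug risk: {risk:.0%}")  # flag for review above some threshold
```

A decade of real commit history gives a model like this far richer features than these three, but the workflow is the same: score the commit before it lands, not after it ships.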

A Chinese-American research group based at the University of Texas has developed an AI bug-detection system trained to prevent Zero-Day attacks. This tool was tested on four widely used commercial software programs and uncovered 10 previously undetected flaws.
As previously mentioned, DARPA is already heavily invested in AI bug-detection. Suresh Jagannathan, DARPA program manager, says their AI Mining and Understanding Software Enclaves (MUSE) program is:
...aiming to treat programs—more precisely, facts about programs—as data, discovering new relationships (enclaves) among this 'big code' to build better, more robust software. 
Central to the MUSE project is the creation of a continuously operational specification-mining engine, which leverages deep program analysis and big data analytics to develop a database of inferences about properties, behaviors, and vulnerabilities within programs. The desired outcome of MUSE is to flip the development process on its head, eliminating or at least vastly reducing the possibility of error in the first instance. 
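MUSE operates over vast corpora of real-world code, but the flavor of specification mining can be conveyed with a toy: run a function many times, propose candidate properties, and keep only those that no observed run falsifies (the Daikon tool pioneered this dynamic style). Everything below is an invented miniature:

```python
import random

def observe(fn, inputs):
    """Record (argument, result) pairs from real executions."""
    return [(x, fn(x)) for x in inputs]

def mine_invariants(observations):
    """Keep the candidate properties that held on every observed run."""
    candidates = {
        "result >= 0": lambda x, r: r >= 0,
        "result >= x": lambda x, r: r >= x,
        "result == x": lambda x, r: r == x,
    }
    return [name for name, check in candidates.items()
            if all(check(x, r) for x, r in observations)]

obs = observe(abs, [random.randint(-100, 100) for _ in range(1000)])
print(mine_invariants(obs))  # likely ['result >= 0', 'result >= x']
```

Scaled up across "big code," mined facts like these become the database of expected behaviors against which MUSE can spot code that deviates.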

Shields up?

Big tech companies are gambling big on AI, but traditional sectors such as healthcare, retail, and telecoms remain hesitant, with very few incorporating AI or ML into their value chains at scale. It seems that despite all the recent investment, the scope of AI deployment is still relatively limited. A recent study of more than 3,000 businesses around the world found that many business leaders are uncertain about the return on investment from AI expenditure.

For all the advantages of AI-driven virus-detection solutions, they do not come without risk. AI and ML models require large quantities of data to learn from, which is expensive, and with so few experts in this burgeoning field, adoption could be slow. There is also the worry that advanced AI and ML models could fall into the wrong hands and be turned against the very defenses they were designed to reinforce. Worst-case-scenario stuff, yes, but something to consider.

Current security systems cannot keep pace with intense and frequently automated attacks like WannaCry, which affected more than 200,000 machines in a matter of hours. Hackers have the advantage of knowing that many of the most widely used security tools, such as antivirus (AV) and intrusion detection systems, are flawed, and they know just how to evade them.
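To see why the defenders are losing this race, consider how classic signature matching works. The sketch below (with an invented "signature database") blocks a known sample by its SHA-256 hash; a polymorphic variant that flips a single byte sails straight through:

```python
import hashlib

# A stand-in for a real AV signature database of known-bad fingerprints.
KNOWN_BAD = {hashlib.sha256(b"malicious payload v1").hexdigest()}

def scan(sample: bytes) -> str:
    """Classic signature check: match the file's hash against known threats."""
    digest = hashlib.sha256(sample).hexdigest()
    return "blocked" if digest in KNOWN_BAD else "allowed"

print(scan(b"malicious payload v1"))   # blocked  (exact signature match)
print(scan(b"malicious payload v1!"))  # allowed  (one byte changed, signature miss)
```

ML-based detection aims to close exactly this gap by classifying behavior and structure rather than matching exact fingerprints.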

Ultimately, AI has the potential to make code vetting less labor-intensive and more accurate. ML and AI are making ever-larger inroads into cybersecurity defense systems, but their current prominence is more buzzword than blueprint for effective bug defenses. Widespread adoption of AI-driven virus detection doesn't look likely anytime soon, but it doesn't hurt to dream of a virus-free future!

BGP: The Arthritic Backbone of the Internet

February 2014 was a cold month for Tokyo residents. In the bustling suburb of Shibuya, the Mt. Gox bitcoin exchange felt the chill most acut...