Thought Leadership Archives - HackerRank Blog

The 5 Most Resilient Tech Roles in 2024 (May 28, 2024)

Layoffs.fyi estimates that tech companies laid off over 260,000 employees in 2023. And in the first five months of 2024, nearly 85,000 workers were laid off. But the effects of this shift in the tech labor market haven’t been felt evenly across all technical disciplines. A select few roles have proven highly resilient despite the tech industry headwinds.

Let’s see which jobs continue to thrive and why they are crucial in the ever-changing tech industry.

Understanding Resilience in Tech Roles

Resilient roles continue to be in high demand despite fluctuations in the job market. These roles adapt to changes, maintain their importance, and often see increased demand. As we’ll see, data engineering is an example of a highly resilient discipline, with demand for data engineering roles rising by 102% from their 2022 highs.

On the flip side, roles that are not resilient can struggle to maintain their demand, often seeing dips in hiring or even mass layoffs. This decline can be attributed to a range of factors, including automation, AI advancements, and changes in business needs. 

For example, demand for mobile engineers has fallen 23% due to AI frameworks (like TensorFlow and PyTorch) simplifying tasks like image recognition, natural language processing, and recommendation systems.

Methodology

Our data comes from our 2024 Developer Skills Report, which combines survey responses from developers, employers, and recruiters with data from the HackerRank platform.

Our list defines resilience by ranking the roles that demonstrated a consistent or increasing volume of coding test invites between 2022 and 2023.

The 5 Most Resilient Tech Roles

 1. Data Engineer

Data engineers are pivotal to the data pipeline. They focus primarily on the architecture and optimization of data platforms. Their responsibilities encompass building systems for data ingestion, storage, analysis, visualization, and activation of vast datasets.

Job Responsibilities:

  • Designing and developing scalable data pipelines
  • Ensuring data quality and consistency
  • Collaborating with data scientists to understand data needs
  • Implementing data security measures

Why the role is important:

Data Engineers are essential because they create the backbone for data operations. With businesses increasingly relying on data-driven insights for decision-making, robust data infrastructure is paramount. The growing ubiquity of AI has also bolstered the demand for this skill set, with data engineers proving vital to the sourcing of data for data-hungry AI models.

As such, the demand for data engineering roles has been resilient, with monthly test invites up 102% from their 2022 highs.

 2. Data Analyst

Data analysts interpret data and provide actionable insights. They are crucial in translating raw data into meaningful information to drive strategic decisions.

Job Responsibilities:

  • Analyzing complex datasets to identify trends and patterns
  • Creating visualizations to present data insights
  • Conducting statistical analysis
  • Collaborating with business units to understand their data needs

Why the role is important:

In an era when data is considered the new oil, data analysts refine this resource. Their ability to derive insights from data helps businesses optimize operations, improve customer experiences, and drive innovation, making their role indispensable in any data-centric organization.

 3. Cloud Security & Cybersecurity Engineer

Cloud security and cybersecurity engineers defend organizations against a wide range of digital threats, including data breaches, malware and ransomware attacks, and phishing attempts. They protect sensitive user and corporate data, prevent operational disruptions, and combat fraudulent activities, safeguarding the company’s reputation and financial stability.

Job Responsibilities:

  • Designing and implementing security measures
  • Monitoring networks for security breaches
  • Conducting vulnerability assessments
  • Ensuring compliance with security standards

Why the role is important:

Put simply, security skills are indispensable. Cybersecurity maintains customer trust, ensures regulation compliance, and preserves operational continuity. By safeguarding data, companies foster customer loyalty and avoid legal penalties while also preventing revenue loss and maintaining productivity.

Organizations avoid costly recovery efforts and regulatory fines by averting data breaches and reducing downtime. Additionally, robust cybersecurity measures diminish the risk of ransomware attacks, eliminating the need for expensive ransom payments and subsequent recovery expenses.

Because of the many essential benefits they provide to both companies and consumers, cybersecurity roles are highly resilient.

 4. Site Reliability Engineer

Site reliability engineers (SREs) are responsible for maintaining the reliability and performance of IT systems. They bridge the gap between development and operations by applying a software engineering approach to IT.

Job Responsibilities:

  • Monitoring system performance and reliability
  • Automating operational tasks
  • Managing incident responses
  • Ensuring system scalability and efficiency

Why the role is important:

SREs are critical in ensuring that digital services are always available and high performing. Their work is essential in minimizing downtime and ensuring users have a seamless experience. The resilience of this role stems from the constant need to keep systems running smoothly, regardless of market conditions.

 5. Machine Learning Engineer

Machine learning engineers design, build, and deploy machine learning models. They work closely with data scientists to develop algorithms that learn from data and make predictions.

Job Responsibilities:

  • Designing machine learning algorithms
  • Implementing machine learning models into production
  • Evaluating model performance
  • Collaborating with software engineers to integrate models

Why the role is important:

Machine learning is at the forefront of the most exciting technological innovations, driving advancements in artificial intelligence, predictive analytics, and automation. Machine Learning Engineers are essential for harnessing the power of data to create intelligent systems. The growing adoption of and reliance on AI-driven solutions underscores the importance of – and opportunity for – this role.

How Akamai Utilizes AI to Eliminate Bias and Improve Tech Hiring (March 22, 2024)


In an era where technology continually reshapes how businesses operate, Akamai stands at the forefront of innovation, particularly in its recruitment processes. Kurian Thomas, Head of Talent Acquisition for Akamai in India, sheds light on how Artificial Intelligence (AI) has become a pivotal tool in enhancing their hiring methodology, specifically by minimizing bias and streamlining candidate evaluation.

Tackling Bias with AI

Historically, recruitment processes have been susceptible to various forms of bias, including educational, gender, experience, and company-related biases. These biases not only hinder diversity but also prevent the hiring of potentially outstanding candidates based on their innate talents and skills. Akamai has turned to AI to address this challenge head-on. AI’s ability to impartially evaluate resumes based on skills and potential has significantly reduced these biases, ensuring a more equitable hiring process.

By leveraging AI algorithms, Akamai has been able to systematically analyze resumes and profiles, focusing on the competencies and capabilities that are most relevant to the roles being filled. This technology sifts through the data with an objective lens, unaffected by the biases that can influence human decision-makers. The result is a selection process that prioritizes merit, skill, and potential, creating a more diverse and inclusive workforce.

Kurian Thomas recognized early on the transformative potential of AI in redefining the recruitment process. “Our goal was to dismantle the barriers that biases erected in our path to finding the right talent. AI emerged as a potent tool in our arsenal, enabling us to look beyond the conventional markers of a candidate’s worth,” says Kurian. This vision led to the adoption of AI-driven processes designed to evaluate candidates based on their skills and potential rather than their backgrounds or identities.

Enhancing Candidate Experience with HackerRank

In the digital age, creating a seamless and fair recruitment process is crucial, especially for companies like Akamai, where remote work is prevalent. The company’s strategic use of HackerRank exemplifies how technology can be leveraged to not only assess technical skills accurately but also to uphold the integrity of the evaluation process.

HackerRank’s proctoring features are at the forefront of combating malpractices, a common concern in remote hiring scenarios. These features ensure that the candidate’s performance accurately reflects their abilities, thereby fostering a level playing field. This technology has become indispensable for Akamai, particularly when conducting remote interviews, hackathons, and technical tests across colleges. By filtering out malpractices, HackerRank helps Akamai identify genuine talent efficiently and effectively.

Kurian Thomas emphasizes the importance of this approach in maintaining the quality of their hiring process: “The proctoring features of HackerRank have been a game-changer for us. It’s not just about filtering out candidates who try to circumvent the system; it’s about ensuring that every candidate we consider has been evaluated fairly and accurately. This has a profound impact on the candidate experience, as it reassures them that their skills and potential are what truly matter to us. In a way, it democratizes the recruitment process, making it more about merit and less about shortcuts.”

The positive implications of such a system extend beyond just the hiring process. By ensuring a transparent and equitable evaluation, Akamai reinforces its commitment to meritocracy and fairness, values that resonate well with prospective employees. Moreover, this approach significantly enhances candidate experience by providing a clear and honest assessment environment, setting the stage for a healthy and productive employer-employee relationship from the outset.

In summary, the utilization of HackerRank not only streamlines Akamai’s recruitment process but also significantly enhances candidate experience by ensuring fairness and transparency. This strategic adoption of technology underscores the company’s commitment to integrity and equality in its hiring practices, setting a benchmark for the industry.

Measuring Success

The effectiveness of AI in recruitment at Akamai is measured through various metrics, including the efficiency of moving from interview stages to offer stages and the ability to handle a high volume of applications. For example, a single senior software engineer position can attract thousands of applications. AI’s capability to stack rank and calibrate resumes has been invaluable, allowing recruiters to focus on the most relevant candidates, thus saving time and resources.

Conclusion

As Kurian Thomas shared, the journey of integrating AI into Akamai’s recruitment processes has been both challenging and rewarding. The strategic use of AI has significantly reduced biases, improved operational efficiency, and enhanced the candidate experience. This innovative approach not only positions Akamai as a leader in leveraging technology for recruitment but also serves as a model for other organizations to follow.

 

ChatGPT Easily Fools Traditional Plagiarism Detection (June 14, 2023)


25% of technical assessments show signs of plagiarism. 

While it’s impossible for companies to fully prevent plagiarism—at least without massively degrading the candidate experience—plagiarism detection is critical to ensuring assessment integrity. It’s important that developers have a fair shot at showcasing their skills, and that hiring teams have confidence in the test results. 

And the standard plagiarism detection method used by, well, everyone, is MOSS code similarity.

MOSS Code Similarity

MOSS (Measure of Software Similarity) is a coding plagiarism detection system developed at Stanford University in the mid-1990s. It operates by analyzing the structural pattern of the code to identify similarity, even when identifiers or comments have been changed, or lines of code rearranged. MOSS is incredibly effective at finding similarities, not just direct matches, and that effectiveness has made it the de facto standard for plagiarism detection. 
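To make the structural approach concrete, here is a toy sketch in Python of fingerprint-based similarity scoring. It is not MOSS's actual implementation (MOSS uses a more sophisticated winnowing scheme), and the token normalization and k-gram size below are illustrative assumptions, but it shows why renaming identifiers or rewriting comments barely moves the score.

# Toy sketch of structure-based code similarity, not MOSS's real algorithm:
# normalize identifiers and literals, hash overlapping k-grams of tokens,
# and compare the resulting fingerprint sets.
import io
import keyword
import tokenize

def fingerprints(source, k=5):
    """Hash every k-gram of normalized tokens in a piece of Python source."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            # Keep keywords (if, for, def, ...) but collapse every identifier.
            tokens.append(tok.string if tok.string in keyword.kwlist else "ID")
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            tokens.append("LIT")  # collapse all literals
        elif tok.type == tokenize.OP:
            tokens.append(tok.string)
    return {hash(tuple(tokens[i:i + k])) for i in range(len(tokens) - k + 1)}

def similarity(a, b):
    """Jaccard similarity of two programs' fingerprint sets (0.0 to 1.0)."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

Two submissions that differ only in variable names or comments produce nearly identical fingerprint sets and score close to 1.0, which is exactly the property that made this family of techniques the industry default.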

That doesn’t mean MOSS is flawless, however. Finding similarity doesn’t necessarily translate to finding plagiarism, and MOSS has a reputation for throwing out false positives, particularly when faced with simpler coding challenges. In our own internal research, we’ve found false positive rates as high as 70%.

AI changes the game

While not perfect, MOSS has been a “good enough” standard for years. Until the advent of generative AI tools like ChatGPT. 

ChatGPT has proven effective at solving easy to medium-difficulty assessment questions. And with just a bit of prodding, it’s also effective at evading MOSS code similarity. Let’s see it in action:

Step 1: We asked ChatGPT to answer a question and it did so, returning a solution as well as a brief explanation of the rationale. 

[Screenshot: ChatGPT prompt to solve a coding question in Python]

[Screenshot: Initial ChatGPT answer to the coding question]

Step 2: Next, we directly asked ChatGPT to help escape MOSS code similarity check, and it refused.

[Screenshot: ChatGPT declining to outright bypass the MOSS code similarity check]

Step 3: However, with some creative prompting, ChatGPT will offer unique approaches. And the way that ChatGPT’s transformer-based model works, it generates distinct answers every time, giving it a huge advantage in bypassing code similarity detection. 

Here are three different prompts and three totally different approaches. Note that ChatGPT transforms many variable names from the initial solution to evade code similarity checks.
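As a purely hypothetical illustration of that kind of renaming (the actual question and ChatGPT's answers appear only in the screenshots below), here is what two functionally identical submissions can look like once every identifier has been swapped. ChatGPT layers this sort of renaming on top of restructured logic when prompted for a fresh approach.

# Hypothetical example, not taken from the screenshots: the same algorithm
# twice, with every identifier renamed between the two submissions.

# Submission A
def count_pairs(nums, target):
    seen = {}
    total = 0
    for value in nums:
        total += seen.get(target - value, 0)
        seen[value] = seen.get(value, 0) + 1
    return total

# Submission B: identical structure and behavior, no shared names
def tally_matches(values, goal):
    lookup = {}
    result = 0
    for item in values:
        result += lookup.get(goal - item, 0)
        lookup[item] = lookup.get(item, 0) + 1
    return result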

[Screenshot: Framing the prompt differently easily sidesteps ChatGPT’s reluctance and yields a unique solution to the problem]

[Screenshot: ChatGPT changing the answer again to deliver a longer, less efficient coded solution]

Step 4: The moment of truth! When we submitted the revised answer through plagiarism detection, it passed cleanly. 

[Screenshot: Dashboard showing that the ChatGPT-generated answer successfully evades detection by MOSS code similarity]

What’s the implication? 

Basically, MOSS code similarity checks can be easily bypassed with ChatGPT. 

Time to panic?

If MOSS code similarity can be bypassed, does that mean that technical assessments can no longer be trusted?

It depends. 

On one hand, it’s easier for candidates to bypass the standard plagiarism check that the entire industry has relied upon. So, yes, there is a risk to assessment integrity.

On the other hand, plagiarism detection has always been a compromise between effectiveness and candidate experience. MOSS is not intrusive, but its high false positive rates render it less definitive than it could be. Ultimately, it’s not really detecting plagiarism. It’s detecting patterns in the code that could be plagiarism.

Move over, MOSS

What happens now?

Plagiarism detection gets rethought for the AI era. Expect companies to scramble for better versions of MOSS, more complex questions, different question types, and more to make up the difference. 

At HackerRank, we’ve taken a different approach. While we’re always improving our question library and assessment experience, we’ve completely rethought plagiarism detection. Rather than relying on any single point of analysis like MOSS Code Similarity, we built an AI model that looks at dozens of signals, including aspects of the candidate’s coding behavior. 

Our advanced new AI-powered plagiarism detection system boasts a massive reduction in false positives, and a 93% accuracy rate. In real-world conditions, our system repeatedly detects ChatGPT-generated solutions, even when those results are typed in manually, and even when they easily pass MOSS Code Similarity. 

What happens when the example shown above gets submitted through our new system? It gets flagged for suspicious activity. 

[Screenshot: HackerRank dashboard showing suspicion flagged as HIGH]

Clicking into that suspicious activity reveals that our model identified the plagiarism due to coding behaviors.

[Screenshot: HackerRank candidate summary showing the suspicious activity flag, with additional detail below]

What’s more, hiring managers can replay the answer keystroke by keystroke to confirm the suspicious activity. 

[Screenshot: HackerRank dashboard showing how AI-powered plagiarism detection correctly flagged this ChatGPT-created answer as suspicious, even when typed in keystroke by keystroke]

There’s nothing even close to it on the market, and what’s more, it’s a learning model, which means it will only get more accurate over time.

Want to learn more about plagiarism detection in the AI era, MOSS Code Similarity vulnerability, and how you can ensure assessment integrity? Let’s chat!

Hiring Surges in July, Despite Economic Concerns (August 11, 2022)


Despite concerns of an economic slowdown, employers across the United States added an impressive 528,000 jobs last month as hiring surged and unemployment dipped to 3.5%, matching its pre-pandemic low.

That announcement hit newsstands late last week. For those who’ve been following the headlines, that number may come as quite a surprise. Take one glance at your LinkedIn feed in recent months, and you’ve probably been hit with a super-sized serving of doom and gloom.

With that news in mind, we found ourselves asking: What’s actually going on in the hiring market? What’s the real story? 

As the market maintains its trend of adding jobs, technical roles in particular continue to be highly sought after. That’s true even in cooler hiring markets and across all industries.

“When people say things are slowing, I ask, ‘On what data?’” said Ryan Sutton, a district president at Robert Half, in a recent NYT article.

And although some major tech companies are making headlines for hiring freezes and layoffs, tech employment is still at a historic level. In July, the unemployment rate for tech jobs fell to 1.7%, approaching the all-time low of 1.3% set in May 2019.

So, we did some digging into the hiring market dynamics we find ourselves in.

Here are some key takeaways:

1. Software Engineers Still Top the List of In-Demand Jobs 

It’s not just tech unemployment data that’s generating buzz. In its latest quarterly report of in-demand jobs, LinkedIn ranked software engineers as the most in-demand role from April 1 to June 30 of 2022. 

Software engineers were at the top of the list in Q1 of this year as well, and this feat becomes even more impressive when you realize that the list included professions of all types, from beekeepers and delivery drivers to registered nurses.

Going a level deeper, five out of 10 roles in LinkedIn’s report were technical roles, including specialties like JavaScript developers and DevOps engineers.

It’s not all that surprising that technical roles made up half the list when you look at recent events like the pandemic and rise of virtual work. Companies of all stripes, across all sectors, were already headed down a path of digital transformation. The pandemic only accelerated those trends. 

To rapidly transform their organizations, companies needed more technical professionals. Now that people have grown accustomed to these changes, demand for the skilled technical talent who sustain and accelerate their innovation initiatives isn’t going anywhere. 

2. Software Engineers Will Be in High Demand for the Next Decade — and Beyond

Software engineers being in high demand isn’t new. Indeed data shows that demand for software developers has rapidly increased over the last two years, and total software development job postings are up by more than 90% since March 2020. 

Software development job postings might have waned over the last few months, according to that same Indeed data, but companies looking to stay competitive should take this news with a boulder-sized grain of salt. 

The U.S. Bureau of Labor Statistics projects the demand for software developers to increase by 22% from 2020 to 2030. To put that number in perspective, the average growth rate for all professions is only 8%.

These long-term projections for demand of software developers are an important reminder to not get caught up in short-term trends and lose sight of the big picture. A study from McKinsey found that companies that continue to invest heavily in innovation during crises emerge substantially ahead of their peers, maintaining this advantage for years to come. And many companies are centering hiring and investment strategies with those outcomes in mind. 

3. Software Engineers Power the Most Promising Technologies of the Future

Cutting-edge technologies will transform the world in unprecedented and exciting ways. Web3 has the potential to revolutionize the internet. Autonomous vehicles may alter the transportation industry forever. And the metaverse could change the way people view and access huge portions of the world. 

But building the technologies of the future requires the highly sought-after talents of skilled tech professionals.

While tech hiring rates may go through short-term ebbs and flows, developers will continue being in high demand for years to come. And companies will need to continue hiring developers and engineers if they hope to harness the skills they need to innovate. 

After all, the future’s counting on it.

What the World Would Look Like Without Grace Hopper (December 9, 2021)


115 years ago today, a girl was born in New York City. The world would never be the same.

Grace Brewster Murray Hopper is remembered as a curious and sharp-tongued woman who shattered not one but several glass ceilings. An admiral in the U.S. Navy who became an early pioneer in computing, her contributions to the world of modern programming became the foundation upon which subsequent generations of technologists built the future.

Today, the tech industry honors its founding mother in every way it can: AnitaB.org’s annual Grace Hopper Celebration, Google’s eponymously named subsea cable system, and Fullstack Academy’s Grace Hopper coding bootcamp, to name but a few.

In honor of her birthday, we’d like to contribute — in a small way — to remembering the woman who perhaps more than anyone embodies our mission of accelerating the world’s innovation. To get a sense of the immense impact the Queen of Code had on our industry, we’ve imagined what the world of computer programming would look like without Grace Hopper’s contributions. 

(Spoiler alert: it’s not pretty.)

1. We would still be writing programs in machine code.

Grace Hopper had a big idea: What if computer programs could be written in English? Although that thought was initially shot down by her superiors, she pursued it diligently — and in 1952, her team developed the first ever compiler for the A-0 programming language. Though it lacked the bells and whistles of a modern compiler, it served as a stepping stone for them and the development of high-level languages at large.

Her motivation was simple. “It’s much easier for most people to write an English statement than it is to use symbols,” she later recalled in her oral history. A few years after breaking new ground with A-0, Hopper and her team went on to write and implement B-0 — or FLOW-MATIC, as it’s better known. It was the first programming language to use English-like expressions.

The impact her vision had on making computer programming more accessible goes without saying. Today, artificial intelligence has made such vast strides that developers can now almost write an entire computer program using only spoken language.

2. Software would likely be closed source.

In 2020, the demand for programmers proficient in COBOL, a 60-year-old language, suddenly skyrocketed in governments and banks. In technical circles, it was aptly nicknamed “the language that won’t die.”

This language was developed by Hopper and her broader team, with borrowed features from FLOW-MATIC — and used pieces of code sent to her by fans of her compiler.

This practice of building and upgrading a software openly and with multiple contributors is now known as open-source development, and Hopper was one of the first people to do it. In recent years, the adoption of open-source software in businesses has soared (look no further than popular open-source tools like Firefox or WordPress), and it’s predicted to soon overtake the success of traditional closed-source software.

3. Bugs wouldn’t be called “bugs.” 

On September 9, 1947, Hopper was working on a computer at Harvard University that seemed to be malfunctioning. She decided to poke around the system to find the source of her troubles.

To her surprise, she found a moth stuck in a relay in the computer. As the story goes, she then taped the insect in her log book, recording the world’s first “computer bug” and coining one of the most popularized terms in the history of computer programming. 

4. Computer manuals might not exist. 

The famed Harvard Mark I computer was the first computer to have a dedicated user manual — clocking in at 561 pages, at that. The team who created this manual consisted of Howard Aiken, who developed the computer, and Grace Hopper.

The comprehensive document was officially titled A Manual of Operation for the Automatic Sequence Controlled Calculator, after the IBM-granted name for the machine, which played a significant role in World War II.

Hopper’s work on the Mark I showed the world how important user manuals are for machines as complicated as computers, and the importance of documenting hardware and software continues to be emphasized today.

5. The world wouldn’t look like it does today.

Technological innovation has completely transformed the way the world works. Our current reality is fully shaped by the apps on our phones, the social connections we make online, and the labor we perform on or alongside computers.

Hopper’s accomplishments are vast, but the opportunities they unlocked for future innovators are even more far-reaching. A safe argument would posit that we have Hopper to thank — in ways both direct and indirect — for every single way in which we interact with technology in modern life.

Hopper spent a lifetime paving the way for modern computer programming, and to this day serves as an inspiration to women in a male-dominated field. In 1973, she became the first woman to become a Distinguished Fellow of the British Computer Society. In 1991, she was once again the first woman to receive the National Medal of Technology. She was posthumously awarded the Presidential Medal of Freedom in 2016. 

While we don’t know for sure what the world would look like without Hopper’s contributions, what we do know is this: The world will continue to honor the gift that was Grace Hopper, for a long time to come.

HackerRank.main() Recap: Introducing the First End-To-End Platform for Remote Hiring (October 7, 2020)


More than 6000 people, ranging from developers to hiring managers to HR professionals, registered for our annual HackerRank.main() virtual event yesterday—making HackerRank.main() 2020 the largest event we’ve ever hosted. 

The two-hour virtual event boasted discussions with thought leaders from ServiceNow, Salesforce, UBS, and MathWorks, and unveiled the fourth pillar within the Developer Skills Platform: Rank.

This final pillar allows companies to benchmark candidates against millions of developers so hiring managers feel more confident when hiring remote technical talent—making the Developer Skills Platform a single source of truth for remote, end-to-end tech hiring.


 

Watch the virtual event on-demand here or keep reading for highlights & takeaways.

 

Keynote: Skills Mapping for Valid & Fair Tech Hiring

We kicked off the virtual event with a panel discussion led by Dr. Fred Rafilson, Chief I/O Psychologist at HackerRank, joined by Kesha Williams, Software Engineer, and Karthik Gaekwad, Principal Engineer. Together, they discussed standardized skills and how to get the most out of your hiring process.


Key Takeaways:

  • Pull from your own experience when building a standardized skills rubric.
  • When defining role requirements, work with stakeholders to understand the expectations for what you actually want the role to fill.
  • Understanding the needs of the project helps you define the type of role you’re looking for.

The Developer Skills Platform in Action

The moment you’ve all been waiting for… the big reveal of the Developer Skills Platform! We unveiled how to optimize hiring in a remote world with a demo of the Developer Skills Platform.

Start Building Great Teams End-to-End

What is the Developer Skills Platform? 

We’re glad you asked! The Developer Skills Platform is the first end-to-end solution for remote tech hiring.

It provides a seamless experience for hiring managers and candidates so the remote interview process is easy, efficient, and effective. Remote work and hiring remote workers is the future of innovation for many companies and embracing an end-to-end solution for remote technical hiring will allow organizations to scale quickly and innovate faster while tapping into a more diverse talent pool. 

The Developer Skills Platform is designed around the four core phases of the hiring process:

  1. Plan: Define the skills required for the role that you are filling from the industry-standard skills directory detailing proficiency levels for 15 in-demand technical roles mapped to 75+ skills. Work with HackerRank engagement experts to clearly define a standard process to assess the necessary skills across each phase of the screen and interview process. 
  2. Screen: Accelerate resume review and enable high-quality candidates to showcase their coding skills with assessments and real-world projects before the interview.
  3. Interview: Conduct real-time, real-world technical interviews from anywhere.
  4. Rank: Identify the best candidates based on assessing the right skills, not pedigree. Compare skills to other candidates for the position, as well as millions of developers worldwide. Continuously improve interviewers by comparing evaluations. A standardized process and scoring ensure a fair evaluation.

“The launch of the Developer Skills platform is the culmination of a decade’s worth of experience. We started out as an assessment platform and as we’ve grown, we’ve continually iterated on our product to meet the needs of our 2000+ customers,” said Vivek Ravisankar, Co-Founder and CEO of HackerRank. 

“Now that the world has shifted to remote work, it’s imperative that companies embrace an end-to-end approach when hiring technical talent. The Developer Skills Platform allows companies to standardize and scale their hiring processes and is perfectly suited for our remote lives.” 

Tick Tock Talks: End-to-End Technical Hiring, Uncovering Best Practices in a Remote World

ServiceNow

First, we spoke with Nancy DeLeon, Global TA Director at ServiceNow. She discussed the importance of implementing diversity and inclusion efforts, as well as how ServiceNow successfully leveraged HackerRank to reduce the engineering resources needed to hire while keeping the bar for candidate skills high.


Key Takeaways: 

  • “Treat people beautifully.” Have those tough conversations and host D&I trainings that ignite change not just within your teams, but outwardly across the community.
  • Having an end-to-end platform improves the quality of the tests as well as the candidate experience.
  • Hiring automation in high-volume areas helps you find quality talent, reduce time to hire, and anticipate attrition in real time.
  • Give candidates the tools they need to succeed. Invest in candidates who may not have passed the first time but who are interested in your brand by providing them with resources to improve and possibly get hired in the future. 

MathWorks

Next, we spoke with Vipresh Gangwhal, Engineering Development Group Manager at MathWorks, about how his team rapidly transitioned from nearly no one working remotely, to 100% remote hiring using HackerRank. 


Key Takeaways: 

  • Having the right technology in place makes it easier to pivot in times of crisis.
  • Automation makes it easy to adapt to the remote world—building human connections is the hard part.
  • Build a diverse candidate pool and create an interview process that removes implicit biases.
  • You need multiple data points to evaluate talent—the Screen is just one data point.
  • From sourcing to interviewing, to starting the onboarding process, having the right expertise partnership in place fosters transparency so different teams can work with each other to hire the right candidate.

Salesforce

Then, we talked with Tim Ahern, Recruiting Leader of Engineering and Technology at Salesforce, about how he identified core competencies that map to what a successful candidate looks like, without implicit biases.


Key Takeaways: 

  • Structure leads to alignment and consistencies that help scale your business and turn on a dime. (Salesforce pivoted to fully remote interviewing in 7 days.) 
  • “Screen people in instead of screening people out.”
  • Having buy-in from executives early on (like a HackerRank Advisory Board) helps push decisions and changes through the organization.

UBS

Lastly, we spoke with David O’Brien, Group Technology Workforce Management at UBS, about how standardizing skillsets helped his team easily pivot into remote hiring across the globe. 


Key Takeaways: 

  • Understand what the technology can offer you, and rebuild and redefine the recruitment process from there.
  • Standardizing skills globally leads to greater flexibility, higher quality, and a more diverse talent pool.
  • Invest in a tech council. Having the technologist drive the content fosters greater appreciation for candidates.
  • Don’t just discover new ways of processing candidates. Learn how to attract them to your organization.
  • Reimagine the entire process—think beyond automating your current paper process.

See you in 2021!

We’re thrilled with the outcome of our first virtual HackerRank.main() event and are already receiving requests to attend next year. If you’re interested in learning more about the Developer Skills Platform, click here.


 

Innovation Has Always Crushed Poverty (August 6, 2015)


For the first 10,000 years of human history, the world was very still, confined by hand-crafted tools, acres of farmland and not much else. When the 18th century rolled around, suddenly there was exponential progress. Everything changed.

The Industrial Revolution introduced the definition of “innovation” as we know it today. It marked an era when inventions upon inventions spread through society, boosting productivity through the automation of labor. The quaint English town of Birmingham was history’s original Silicon Valley, a fertile destination for the forward-thinking minds who created technologies that shook the rest of the developed world.

With each new tech-driven idea unleashed in the market, the progress bar for societal and economic prosperity shot up higher and higher. Income skyrocketed. Families had more food on the table. People lived longer. A middle class blossomed as folks below the poverty line capitalized on new, attainable job opportunities. But it wasn’t just the ground-breaking inventions that made this era significant--it was the longevity of their impact, still felt today.

In just the last 20 years, the rise of the software technology revolution has produced a spurt of innovations with an economic impact similar to that of the 1760s. With each year that passes, computing speed and capacity increase exponentially, while the cost of software and hardware plummets. Thanks to the parallel expansion of world trade, today we’re witnessing another pivotal moment in human history. Developing worlds that were once frozen in the agricultural bubble are starting to burst into the 21st century. Rural dwellers in China, India and other eastern developing nations are moving to urban factories.

Just as innovation empowered the lowest social class with the opportunities to rise up, technological innovation will continue to pull poverty-stricken people out of the slums worldwide. This visualization by economist Max Roser reveals a fascinating, optimistic outlook on the future end of poverty. Even though our population has been increasing exponentially, for the first time ever, the absolute number of people in poverty is lower than ever before. In order to truly understand why absolute poverty started to decline, we must go back to the eve of the Industrial Revolution.

When Innovation Gathered Steam: The Rise of Social Mobility

Many historians have dedicated their lives to figuring out why there was a sudden burst of significant inventions and desire for entrepreneurship particularly in the 1700s. Why then? Why there? Over 200 theories are circulating. But one fascinating explanation involves a shift in human character and thinking. Historian William Rosen posits:

In England, a unique combination of law and circumstance gave artisans the incentive to invent, and in return obliged them to share the knowledge of their inventions.

In other words, it was the first time that ideas were considered property--a source of profit. Until this point in time, Europeans had been drenched in religious domination, suffering from metathesiophobia, or fear of change. But following the Enlightenment, commercializing new ideas became a possibility.

So a diverse body of visionary industrialists, from educated scientists to uneducated merchants, set out to improve their way of life--profiting along the way. At this point, Britain was the only place in the world that had developed a patent system.

It was not until the 17th century that patents were associated entirely with awards to inventors.... The Statute of Monopolies allowed patent rights of fourteen years for “the sole making or working of any manner of new manufacture within this realm to the first and true inventor…”

The Industrial Revolution in 1700s England was the earliest form of the technological innovation that’s booming today. Not only were people empowered to become inventors but--for the first time--humans weren’t pegged into only two classes: “poor” or “elite.” Anyone had the opportunity not only to profit from their idea but also to learn an in-demand operational skill spawned by new product technologies. One such revolutionary job-generating technology was the steam engine, which scholars point to as the symbol of the Industrial Revolution.

“Thousands of innovations were necessary to create steam power, and thousands more were utterly dependent upon it, from textile factories—soon enough, even the water frame was steam-driven—to oceangoing ships to railroads,” - Rosen

The steam engine was the nucleus of innovation in Birmingham that spawned the Industrial Revolution. It was much like the way transistors spawned the Information Age in Silicon Valley. The mobility of steam as an energy source freed people from the chains of agrarian society. Factories could be built anywhere, spawning urbanization and opportunities for wealth. It helped develop a substantial middle class--a first for humankind.

How Global Poverty Halved Since the ‘90s

Fast-forward to the present day: even though the population has increased 7X, poverty is at an all-time low and rapidly falling. Between 1990 and 2010, absolute poverty fell from 43% to 21%--just about half.

So what extinguishes the flames of poverty? History tells us that a boom in economic growth and employment opportunities drives poverty down over the long term. The Organization for Economic Co-operation and Development confirms that the rate of poverty decline is directly correlated with the increase in economic prosperity in any given region.

For example, a flagship study of 14 countries in the 1990s found that over the course of the decade, poverty fell in the 11 countries that experienced significant growth and rose in the three countries with low or stagnant growth.

From the 1990s to 2010, developing countries’ GDP has been growing by about 6%, as depicted here:
[Chart: GDP growth in developing countries]
Although there are quite a number of factors at play in this economic growth, optimizing productivity through smarter software tech plays a crucial role in bolstering developing economies. Africa is a prime example of how technology is helping economies grow. In the last decade, African countries have boasted fast-growing startup markets. Mobile phones have been one prolific economic growth engine, reducing the cost of communication, fueling productivity and opening up access to capital.
One analysis of 96 different countries found that a 10% increase in mobile phone penetration can increase GDP growth by 1.2%!

So, on one side you have widespread use of mobile technologies in one of the most underdeveloped continents on the planet. On the other side of the innovation loop, mobile adoption was directly associated with a boost in those countries’ economies. Above all, these studies show that economic prosperity, spawned by innovation, helps flatten poverty. This is just one example of an underdeveloped region benefiting from a technological economic engine.

Emerging Worlds are Now Leapfrogging the Industrial Revolution

As a result of large-scale tech adoption in developing countries, these folks are emerging from their agrarian bubble to not only catch up with the Industrial Revolution but, in some cases, jump right into the digital age. Not only are some Africans skipping traditional institutions (like adopting virtual money instead of building banks), but Africa is also turning into a hub of mind-blowing innovations. For example:

  • The BetaBook - A simple whiteboard that connects with a smartphone to help ease communication by instant translation with non-native speakers.
  • The iCOW App - As high-growth as Africa is, it’s still largely dependent on agriculture. But this app helps African farmers manage their cows. The app keeps track of the estrus stages of their cows, while giving them valuable tips on cow breeding and more.
  • The CardioPad - A young African engineer created a medical health tablet that performs heart examinations remotely in rural areas, helping save thousands of lives.
  • LifeStraw - It’s a portable water filter that claims to remove a minimum of 99.9% of waterborne diseases through instant filtration.

From its very first appearance as the propagator of the Industrial Revolution, innovation has always been an indirect suppressor of poverty. The motivation of turning a profit from something that actually benefits the masses proves to be highly impactful. Innovation benefits both the inventor and the consuming market over time by creating more opportunities for economic growth.

When you zoom out and take a look at human history as a whole, you can see that greed isn’t always evil. When incentivized to profit from ideas, humankind began changing the world for the better. True innovation is a result of harmonious collaboration between those who engineer an idea and those who can actually implement it.

The biggest inhibitors of progress thus far have been governmental policies, which is largely why the developing world has been slow to adopt new technologies. But in the last few decades, we’ve seen immense progress that makes us feel optimistic about a future without poverty. Underdeveloped nations that were riddled with government regulations and policies are now benefiting from open channels of commerce.

With the rise of globalization, tech companies can continue to help make the world flatter by innovating on both sides of the globe while profiting along the way.

How Programmers Freed Hollywood (June 29, 2015)

In Disney’s 1982 film Tron, software engineer Kevin Flynn is teleported inside a computer by the evil supercomputer Master Control Program. The camera pans over an endless arena with geometric 3-dimensional floating objects above a white grid. Surrounded by a neon-etched digital world, Flynn looks around and whispers in disbelief: “Wow.”  At the push of a button, a laser rod appears in front of Flynn and he grabs the ends like handle bars. Within a split second, he’s propelled forward and inside a digitized Tron Lightcycle, battling two warriors in an endless maze. Voracious sounds of zooming cars are blaring.
This classic Tron Lightcycle scene was a phenomenal technological achievement in cinema. It marked the first extensive use of computer-generated imagery (CGI) illustrating the background, objects and movement in a feature film. For at least 15 minutes, Tron offered a glimpse into an unnatural world of boundless imagination and altered the way we visualized the world forever.
But who was the first to envision this new world, and how did he push the limits to bring the mind-bending innovations that paved the way to our favorite Hollywood films today, like Avatar, Transformers, Interstellar and countless other computer-generated masterpieces?

Coaxing Magic from a Machine

Before CGI made its debut in Hollywood, the earliest uses of computer graphics were in flight simulation technology developed at the tail end of WWII and in systems built to track enemy aircraft in the 1950s. For the first time, the government needed graphics on a radar screen to identify aircraft entering US airspace.

The CGI technology first came from early tech company Mathematical Applications Group Inc. (MAGI), which developed software called SynthaVision to evaluate nuclear radiation exposure. It used ray-casting to trace the source of the radiation, but folks at the company realized they could use the same process to trace light and create images.
When 26-year-old director Steven Lisberger saw a sample reel of MAGI’s CGI, he saw an opportunity to bring video game characters to life on the silver screen, inspiring the animated film, Tron.

“I realized that there were these techniques that would be very suitable for bringing video games and computer visuals to the screen. And that was the moment that the whole concept flashed across my mind,” Lisberger says.

But in the early 1980s, animators couldn’t just obtain CGI software off the shelf. They had to approach software engineers to write CGI programs from scratch. Almost all studios were extremely hesitant about the futuristic idea. Warner Bros, MGM and Columbia all turned down the production of Tron, largely because of the enormous cost and time for an uncertain outcome. But Disney, which was historically more inclined to experimentation, saw one demo of the CGI and took a leap of faith by investing $20 million into the creation of Tron.
And so, Lisberger, who was decades ahead of his time, set out to find the right programmers to bring his vision to life with technology that was yet to be created. It took a technological village. Four different computer companies banded together to create the visuals behind the 15-minute CGI scene over the course of a year. Lisberger recruited folks from early tech companies that specialized in graphics:

  • MAGI
  • Triple-I
  • Robert Abel & Associates
  • Digital Effects of NY

Since there really was no “computer graphics specialist” job at the time, a combination of physicists, mathematicians, computer scientists and electrical engineers practiced the discipline at these early computer graphics companies. Each of the four groups had their own proprietary software for different aspects of CGI and worked in silos. While MAGI specialized in computer simulation of the Light Cycles, Robert Abel & Associates did the color vector animation, for instance. Robert Abel & Associates’ Richard Taylor was the visual effects supervisor and pulled the strings to help everyone get on the same page and speak the same language.
Tron’s CGI was largely created using the vector graphics machine Evans & Sutherland Picture System on a PDP-11 computer with 2MB of memory and a disc holding not more than 330 MB.
It took hours to render each frame. And they wrote many brand-new CGI programs specifically for each animation task. For example, the creative team really wanted the arena in Tron to look massively wide. Computer image choreographer Bill Kroyer explains how the companies made it happen:

In real life you do that by softening the focus, and kind of dimming the colors. We came up with something that is very simple and I think is standard technique now in computer graphics which is called depth glowing. You assign a mathematical progression to the light of the points, depending how far away they are from the camera source. The farther away they are the less distinct they are, and that makes them look farther away. It’s something you automatically get in live-action photography. It’s something you have to mathematically apply to a computer image.
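In modern terms, this is depth cueing. As a rough sketch of the idea (the linear falloff and the near and far distances below are illustrative assumptions, not the actual math used on Tron), fading a point's color toward the background with distance might look like this:

# Minimal sketch of depth cueing ("depth glowing"): blend a point's color
# toward the background the farther it is from the camera. The linear
# falloff and the near/far values are illustrative, not Tron's actual numbers.
def depth_cue(color, distance, near=10.0, far=500.0, background=(0, 0, 0)):
    t = (distance - near) / (far - near)   # 0.0 at near, 1.0 at far
    t = max(0.0, min(1.0, t))              # clamp to [0, 1]
    return tuple(round(c * (1 - t) + b * t) for c, b in zip(color, background))

# A bright cyan grid line dims as it recedes into the arena.
print(depth_cue((0, 255, 255), 50))    # close to the camera: nearly full brightness
print(depth_cue((0, 255, 255), 400))   # far away: mostly faded toward black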

Creating Tron helped create the future of CGI in Hollywood. In fact, many of the programmers who worked at Robert Abel & Associates eventually went on to create Wavefront Inc., the first company to sell off-the-shelf CGI software.

A Giant Leap Forward & Three Steps Backwards for CGI Cinema

Although Tron was a huge technological achievement in cinema, it tanked at the box office, which prompted Hollywood to generally close its doors to CGI. In fact, the Academy eliminated Tron from the Visual Effects Award that year because, they said, using computers was “cheating.” That’s how fearful people were of computers in the 80s.

At the time of its inception and for several years afterward, Hollywood producers were leery of going into deficit financing for expensive computer-generated graphics. In 1988, Outland director Peter Hyams explained the sheer risk of using CGI at the time:

“The thing that everybody detests about special effects is the amount of time, whether they’re done with computers or models or zebras, they’re terribly expensive and terribly slow…Ultimately someone goes and takes this very expensive piece of motion control equipment and it does what it does and the model flies just that way and you say ‘I don’t like it,’ and the guy says: ‘Well, that’s what was on the board, and that’ll be $35,000 please and it’s too late.”

Plus, as innovative and spectacular as Tron was, many directors felt that CGI could never completely replicate the essence of being human, which is the foundation of storytelling. It just looked…fake, many said. At the time, CGI technology wasn’t advanced enough to mimic the motions of a real person, like the way your elbows and hips move in concert when you walk, or the different shades of your skin created by the warm blood running beneath it. There was a long way to go in creating a truly digital alternate world, and very few visionaries were willing to keep taking risks to continue trailblazing CGI on the silver screen.

‘George [Lucas] Doesn’t Know What He Has’

In the early ’80s, even the legendary director George Lucas wasn’t a full believer in CGI for his forthcoming Star Wars sequels. Alvy Ray Smith, the head of the newly built computer graphics branch of LucasFilm studio, explains how the Hollywood head honcho didn’t really have a clear vision for the potential of CGI. Still, Lucas asked Smith to assemble a team of the best computer graphics visionaries of the time. Excited to be part of Lucas’ vision, Smith pulled in colleagues from New York Tech, tech companies like Boeing, and new grads from Cornell University and the Jet Propulsion Laboratory. Curiously enough, Lucas didn’t ask them to work on CGI for his film. Instead, he gave them tasks that used barely half of their potential, like creating a digital film printer, an audio synthesizer and a controlled video editor.

“There were no requests from Lucas to do what we were really good at,” Smith says. “By this time, it dawned on me that he did not understand raster graphics. George doesn’t know what he has. Although Lucas was clearly a visionary in digitalizing Hollywood, the best I can say about his computer graphics vision at that time was that he allowed me to assemble the best team of computer graphics wizards in the world.” 

But it makes sense. The best directors’ number one priority is to create an enthralling story that makes you forget about any fancy technology at play. Lucas paid close attention to direction, not novelty graphics, Smith explains.
LucasFilm might never have delved into CGI if it weren’t for Paramount Pictures, which contracted the Industrial Light & Magic (ILM) division of LucasFilm to create a digital Genesis scene for the upcoming film Star Trek II. At the time, ILM worked only with physical models, not CGI. Two lucky things helped propel a series of fortunate events that eventually led LucasFilm into the world of CGI. One, ILM happened to work in the building next to LucasFilm’s computer department. Two, an ILM team member happened to be familiar with the computer folks next door.
And so the challenge was on. LucasFilm’s computer department would take on the contract work to create the Genesis scene in Star Trek II, in which a dead planet is brought back to life.
But the real mission of the project was to, as Smith put it, “knock George’s socks off.” The computer graphics team essentially wanted to create a “60-second long commercial to Lucas” by directing a camera move that Lucas would know could only have been created by a computer. Then he’d fully realize what kind of talent he had at his disposal.

We proceeded to design a move [building emotional force] based on the idea of a spacecraft flying by a dead, moon-like planet with a camera attached to the craft…It is a twisting, spiraling, accelerating, decelerating, sweeping, reversing, minute-long, continuous camera move.

The plan worked. At the premiere, Lucas was impressed, praised that particular camera shot, and called on his computer graphics department to create 3D CGI in his next film, Return of the Jedi. You might say that Lucas was among the best Hollywood tech recruiters of all time! He built a team of some of the best CGI dreamers and stepped out of the way long enough for them to create something brilliant. Something he might never have imagined.

Cultivating a Generation of Dreamers

But it’d still be a couple more decades until CGI boomed as an independent industry. CGI grew in parallel with the widespread acceptance and popularity of personal computers, like the Apple computer. Still, Tron planted the seed for the next generation of dreamers to evolve what Lisberger started.

“At a certain point in time, the gears really lined up with the original Tron. And it turns out they didn’t forget that. And after 28 years, those kids are now producers, and studio executives, and they were now 35 and 40, and in a position to make the film and take their own kids.”

Forward-thinking programmers in Hollywood inspired a new generation of innovators through both the technology itself and the films. Today, there are almost 70,000 multimedia artists and animators creating CGI magic for film and TV in the US alone. The majority of the top-grossing films since the late ’90s have been made with computer-animated effects:

  • Avatar (2009) – $2.8 billion
  • Titanic (1997) – $1.8 billion
  • Harry Potter and the Deathly Hallows, Part 2 (2011) – $1.3 billion
  • Jurassic Park (1993) – $357 million
  • Terminator 2 (1991) – $519.8 million

Just think, without programmers, we might have never seen the ill-fated Titanic sink. We might have never stepped into the alternate universe of Avatar. Or we might have never met Wall-E, Gollum or Woody! In the words of Robert Abel, a pioneer in computer graphics: “Technology just frees us to realize what we can imagine. It’s like being given the power to do magic.”



The post How Programmers Freed Hollywood appeared first on HackerRank Blog.

]]>
https://www.hackerrank.com/blog/how-programmers-freed-hollywood/feed/ 4
Democratizing Healthcare Through Technology https://www.hackerrank.com/blog/democratizing-healthcare-through-technology/ https://www.hackerrank.com/blog/democratizing-healthcare-through-technology/#comments Fri, 19 Jun 2015 23:52:51 +0000 http://bloghr.wpengine.com/?p=5627 This article originally appeared on Forbes.com. You’ve probably read plenty about the trendiest fitness consumer...

The post Democratizing Healthcare Through Technology appeared first on HackerRank Blog.

]]>

This article originally appeared on Forbes.com.


You’ve probably read plenty about the trendiest fitness consumer gadgets, like the Fitbit, Jawbone or Apple HealthKit. But there’s a more significant healthcare revolution emerging right now at the convergence of affordable mobile tech and widespread broadband network connectivity. The mobile health revolution, widely dubbed “mHealth,” is empowering developing nations with more affordable, accurate and accessible healthcare than ever before. And it’s just the beginning.

The incredible speed of mobile phone and smartphone adoption will revolutionize health education and access to care. Mobile technology has penetrated nearly every corner of the world. Just think: There are at least 6 billion mobile subscriptions worldwide, according to the International Telecommunication Union. That’s almost the entire global population.

This is in line with analyst Chetan Sharma’s report, which famously says people in the most underdeveloped areas have access to mobile phones more often than to basic essentials, like electricity and drinking water.


How Did Mobile Tech Start Booming in Slums?

This is a fascinating era of technology and healthcare. Just a decade or so ago, street vendors in the slums of the developing world were confined to meager means of survival, owning not much more than the clothes hanging loosely on their bodies. Whether by selling crops or cycling a rickshaw, most people living in the slums of developing countries like India, South Africa and Kenya have been limited in their access to basic needs for centuries.

“In the past two years, a billion phones have been sold, almost all of them to the poorest 20 per cent of the world’s population.” – Journalist Doug Saunders.

Today, many slum dwellers hold in their palms access to information and connectivity that can boost their livelihood, health and well-being. Technologists now have an opportunity to make a true difference and flatten the world by creating products and technologies that open up healthcare access to everyone.

But why would folks living in villages save up their hard-earned money for a mobile phone over basic essential items, like clean water or toothbrushes? There are quite a few reasons why makers of cheap mobile phones, like Nokia and Motorola, have their sights set specifically on folks living below the poverty line. First, it’s a sound economic investment. For slum dwellers, saving up an entire month’s salary to buy access to valuable information makes sense in the long run. For instance, in Kibera, people are actually using mobile phones to scope out the cleanest water supply. One Stanford research group traveled to the Kenyan slum and found that the community had systematized safe-water identification through mobile phones:

Lugaka dials *778# onto the phone’s large buttons. A few seconds later, a SMS message pops up on the phone’s small screen prompting him to press ‘1’ for water, ‘2’ to sell water or ‘3’ to file a complaint. He presses ‘1’ and a list of villages appear that have water available that day. Next to each landmark is the cost of water that day.
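The service described above is, at its core, a tiny menu lookup sitting behind a short code. Purely as an illustration (the menu wording, village names and prices below are invented for the sketch, not taken from the actual Kibera system), a handler for that kind of text menu could be written in a few lines of Python:

# Hypothetical data: village names and per-jerrican prices are made up for illustration.
WATER_PRICES = {"Makina": 3, "Soweto East": 5, "Lindi": 4}

def handle_choice(choice):
    # Returns the SMS text for a single-digit menu choice, mirroring the
    # press-1/press-2/press-3 flow described in the Stanford account.
    if choice == "1":  # list villages with water available today, with the price
        lines = [f"{village}: {price} KSh" for village, price in WATER_PRICES.items()]
        return "Water available today:\n" + "\n".join(lines)
    if choice == "2":
        return "Reply with your village and price to list water for sale."
    if choice == "3":
        return "Reply with your complaint; a community worker will follow up."
    return "Press 1 for water, 2 to sell water, 3 to file a complaint."

print(handle_choice("1"))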

Plus, consider just how much cheaper phones have gotten. Here’s a graph that depicts the price drop in Indian-brand smartphones, down by more than 50% in about a year. And this doesn’t count the second-hand market, which is undoubtedly even cheaper and just as functional.

[Chart: price drop in Indian-brand smartphones]

Earlier this year, Microsoft broke the record for the cheapest mobile phone on the global market: the Nokia 215. At just $29, you get Internet connectivity, a camera and a battery that can last almost a month on standby. Even if people can’t access the Internet from their phones, software services like Binu use cloud technology to deliver Internet access to ordinary mobile phones.

There’s still quite a long way to go for total broadband network connectivity in rural areas. However, here’s a bright spot: the number of fixed wireless broadband subscriptions more than doubled, from 472 million in 2011 to an estimated 1.16 billion in 2013, surpassing the number in developed countries. Tech giants Google and Facebook are also working on initiatives to bring connectivity to lesser-developed nations.

mHealth Transforms Patient Care


mHealth has the power to transform healthcare at every level of sophistication. At the most basic level, informational updates via SMS from health services can be monumental. Text services like Text4Baby and MobileMidwife offer pivotal advice on maternal care on a weekly basis. Just like the folks in Kibera who use SMS to find clean water, Indians and Sri Lankans are using mHealth tech to speed up public health reporting of dengue fever, a mosquito-borne disease. This kind of awareness through basic SMS services can slow the spread of disease.

At a slightly more advanced level of the mHealth spectrum, sharp smartphone cameras are powerful medical tools. ClickMedix software, for instance, lets you get tele-consultations by sending photos, videos or even text messages. Indian health workers whose patients suffer symptoms of deafness can use ClickMedix’s online platform to send health data to the ENT surgeons who oversee the program and confirm diagnoses and treatment plans. Similarly, in Tanzania, mHealth startup iDoc24’s mobile app First Derm diagnoses patients in isolated rural communities, replacing the alternative: a five-hour bus ride to the closest hospital.

At the most advanced end of the mHealth spectrum, smartphones, which are also dropping in price and booming in developing countries, have sensors that can pick up health data on nearly any physiological metric, like eye pressure and brain waves. These sensors give patients the capacity to manage chronic illnesses without paying a visit to the doctor.

The AliveCor heart monitor is one amazing mHealth innovation: an iPhone case with built-in electrical sensors. All you have to do is press it against your chest to perform a routine heart check-up. EyeNetra is another cool mHealth startup, bringing vision correction to the masses. It’s a plastic clip-on attachment for your smartphone that replaces the $10,000 autorefractor machine found in your optometrist’s office. Its on-screen visual test spits out a prescription…all for under $30.

This is Just the Beginning for mHealth

Of course, as with any ancient, government-regulated institution, innovation doesn’t come without bureaucratic hurdles. If you walk into any healthcare facility in the US with an app idea, you’ll likely get a few alarmed faces. The risks associated with losing or mixing up patient data, or with misdiagnosis, are far too high while mHealth is still so new. It’s why there’s been some pushback against a centralized online database of healthcare records.

But there are indications that we’re moving forward. For instance, the FDA published a guideline document solely for developers who want to create medical mobile apps. Furthermore, the Affordable Care Act mandate incentivizes hospitals to not just treat patients but keep them healthy. Since mHealth reduces the cost of office visits and equipment, governments, hospitals and insurance companies worldwide have strong incentives to keep people healthy through mHealth preventative screening tools.

Healthcare regulations mean innovation may take longer to become completely ubiquitous, but the possibilities are worth maneuvering through the red tape. As federal governments start standardizing the development of mHealth tools, more developers will collaborate with doctors to create self-diagnostic tools.

This article only touches on a few aspects of mHealth’s boundless potential to make the world flatter and healthier. For a more holistic picture of the future medical model, take a look at this fascinating chart recently tweeted by Eric Topol, author of The Patient Will See You Now.

[Chart: the future medical model, via Eric Topol]

We’re on the verge of a movement that’s shedding old medical practices that rely heavily on institutionalized health management. If mHealth continues to proliferate at this rate, medical treatment will become incredibly convenient and efficient. Already, we’re seeing the clever ways that underdeveloped nations are leveraging mobile tech to benefit their well-being. Eventually, everyone in the world will have the ability to perform preventative care from the palms of their hands, saving cost, time and, above all, lives.

The post Democratizing Healthcare Through Technology appeared first on HackerRank Blog.

]]>
https://www.hackerrank.com/blog/democratizing-healthcare-through-technology/feed/ 1
War, Passion & the Origin of Computer Societies https://www.hackerrank.com/blog/war-passion-the-origin-of-computer-societies/ https://www.hackerrank.com/blog/war-passion-the-origin-of-computer-societies/#comments Thu, 28 May 2015 00:42:28 +0000 http://bloghr.wpengine.com/?p=4335 Every computer scientist knows the Association of Computing Machinery (ACM) and the Institute of Electrical...

The post War, Passion & the Origin of Computer Societies appeared first on HackerRank Blog.

]]>
Every computer scientist knows the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). With over 160,000 members collectively worldwide, ACM and the IEEE Computer Society are the largest catalysts for bringing together the most enthusiastic, determined and intelligent minds devoted to advancing computing technology.
But how did such computer science organizations emerge? Tracing the origins of today’s largest computing organizations reveals a fascinating story of passion for a new trade at the culmination of WWII.
How it All Began
It was 1946, and the pressure to advance technology in the face of warfare was high. A team of scientists at the Moore School of Electrical Engineering in Pennsylvania introduced the world to the first powerful, multipurpose digital computer: the ENIAC (Electronic Numerical Integrator And Computer). It was originally designed for the army to calculate artillery firing tables with 1,000 times more power and speed than traditional machines.
As early as the 1930s, and even after the war ended in 1945, the nation’s defense establishment depended on mathematicians, engineers and scientists in labs across the country to keep improving technology, not only for weaponry but also for logistics, communications and intelligence.
As a result of the war, demand for mathematicians, statisticians and engineers to iterate on such computing devices rose dramatically. Look at the spike in demand for mathematicians and statisticians between 1938 and 1954:

[Chart: growth in demand for mathematicians and statisticians, 1938–1954]
Because of covert wartime operations, many of these inventions and advancements remained behind closed lab doors. It wasn’t until February 1946 that the ENIAC was introduced to the world in the press, often referred to as the “Giant Brain.” Intrigued by automatic computing, researchers saw the potential value of computers for other areas as well. For scientists, this was a massively powerful machine with immeasurable computing potential. Just think: unlike any other existing machine, it could solve 5,000 addition problems in one second.
There was so much more to explore, understand and scientifically test. It signified the birth of a brand new field. It quickly became an exciting topic of imagination and discussion for industrialists across the nation.
The Origin of IEEE’s Early Computer Societies

It was in the ENIAC’s birth year, and in the same city, that the first computing committee of what would become the IEEE began: the Computing Device Committee (CDC). At this time, the IEEE did not yet exist; in its place were two rival societies, the American Institute of Electrical Engineers (AIEE) and the Institute of Radio Engineers (IRE). Both formed their own committees dedicated to understanding the new field of computing.
For instance, one mission of the IRE’s new technical committee on electronic computers was to standardize the glossary of the emerging field of computer science. It sounds mundane to us now, but someone had to come up with uniform names for brand-new concepts. One hot debate was what to call the tiny unit of time used to describe the ever-increasing speed of switching circuits. It was almost named the “babbage,” most likely after Charles Babbage, a father of the computer. Ultimately, they voted for the term “nanosecond.”
The founding members of these committees were some of the most forward-looking minds behind early computing inventions.

Interest in computing grew swiftly, and in 1951 the IRE decided to establish a paid-membership group (like ACM): the Professional Group on Electronic Computers (PGEC). It grew from about 1,100 paid members in 1954 to over 8,800 at the end of the decade. Eventually, the different computing committees joined forces to create one giant Computer Group and, later, the Computer Society.

The Origin of ACM

As the ENIAC sparked an uptick in gatherings to discuss digital computing, one pivotal convention was the Symposium on Large-Scale Digital Calculating Machinery in January 1947. Over 300 technical experts from universities, industry and government met at Harvard University to watch technical paper presentations and a demonstration of the Mark I Calculator.
It was at this symposium that computer pioneer Samuel H. Caldwell first expressed the need for a dedicated association solely for people interested in computing machinery. Sure, there were computing committees as arms of larger related organizations (e.g., AIEE’s CDC), but there needed to be a better way for interested computing experts to exchange ideas, publish official journals and tackle challenges across these organizations.
By summertime, there was modest support for the idea, and a “Notice on Organization of the Eastern Association for Computing Machinery” was sent to anyone who might be interested in computers. Just like the founding members of the first computing committee at AIEE, ACM’s founding council members were accomplished computing pioneers:

  • R.V.D. Campbell worked on the Harvard Mark I-IV.
  • John Mauchly co-designed the first general-purpose computer and the first commercial computer.
  • T. K. Sharpless contributed to the design of the high-speed multiplier.

On September 15, 1947, some 48 people met at Columbia University, formally voted to start the association and elected a board. At that first meeting, T. K. Sharpless talked about the pilot model of the EDVAC, a stored-program computer. At the following meeting that same year, they covered 13 technical papers! And, this time, over 300 people joined in.
Because interest was catching on in the community, by 1948 they decided to drop the “Eastern” from the name and expand the association. Both the membership and the value of the association grew rapidly early on. Membership just about doubled between 1949 and 1951. Even though dues increased from $1 annually in 1947 to $10 annually in 1961 to support the expansion, more people kept joining. In fact, some notable founding members, like Sharpless and Concordia, belonged to both ACM and IEEE’s Computer Society.
[Chart: growth in ACM membership]


A Passionate Pursuit by Forward Thinkers: Edmund Berkeley & Charles Concordia  
You’d think the biggest champions of the ACM & IEEE Computer Society would be the leaders who invented the first electronic automatic computers, like the ENIAC or Atanasoff–Berry Computer, right? Well, you’d be wrong.
Although many of the early fathers of automatic computing machinery played integral roles as presidents and council members of ACM and the IEEE Computer Society, the early champions and heavy lifters of both computing societies weren’t the industry or government inventors of the modern automatic computing machine. They were admirers and researchers who passionately believed in the significance of these computing advancements.
Dr. Charles Concordia Led the AIEE’s CDC

When the AIEE formed its first computer committee, Dr. Charles Concordia was a prominent electrical engineer. He was an early computer user rather than an inventor. His electrical engineering work at the General Electric laboratory frequently required the use of the differential analyzer (an analog computer) housed at the Moore School of Electrical Engineering.
There, he was exposed to many of the new electronic computing devices, including the ENIAC, and saw something with incredible potential. As an active member of the AIEE, he knew there needed to be a more concerted effort to understand and advance the future of computing. And so, without any background or experience in building early computers, he boldly presided as chairman of the CDC and pulled in other computer pioneers, like John Grist Brainerd, who famously worked on the ENIAC project, to form the first computing committee in 1946.
It’s interesting that someone who specialized in detecting cracks in railroads, designing generators and advising on a pumped hydro storage project would lead a committee entirely dedicated to exploring automatic computing machines like the ENIAC. Computer science was too new for him to know definitively what impact computing would have on his field of electrical engineering.
Edmund Berkeley: The Man Behind ACM

Edmund Berkeley is cited by multiple sources as the sole person who originated the ACM. While Berkeley was an expert in early computers, first through his work on the Mark II during WWII and then through the computerization of the Prudential Insurance Company, he wasn’t himself an inventor of the early modern computers. Rather, he was a passionate writer, editor and publisher on computing as it relates to society and education. Later on, he created an educational toy, Simon, that taught people more about coding.
As the founding secretary, he worked diligently to connect interested parties across groups in different regions and, for six years without pay, laboriously did all of the secretarial work that no one else wanted to do, manually mimeographing documents for members.
What propelled him to work so hard as the sole driver behind creating the ACM? Berkeley was highly vocal about computing as a means to understand the fundamental problems of the world. He wanted to advance technology so that it could touch everyone’s lives positively. That required a free flow of information, something the war had prevented until then and something this association helped facilitate.

“I read somewhere that the Soviets thought people ought to be taught about computers based on what 20-30 experts have to say. That’s stupid. What ought to be taught about computers is a result of looking at the world and seeing what needs to be taught about computers….I think what the ACM should concentrate on is making a list of the nine most important problems in the world. And then if they have the time left over, publish junk that only 50 people can understand.”

The Lasting Legacy of Passion in Computer Science for the Greater Good

During a time when computer science wasn’t even an accepted discipline, the creation of ACM and IEEE’s early computer groups offered a haven of bountiful access to exciting resources, ideas and inspiration from people at the forefront of this brand new science.
Created by passionate believers and eventually led by pioneers of early computing history, these organizations were responsible for turning the mysterious, complex wartime computing machines into the foundation of an educational discipline.
Early on, the primary activity of ACM and the IEEE Computer Society was to arrange national meetings and publish journals that connected the world with the leading experts who helped cement computer science as an educational discipline. Until these associations were formally created, there was no easy way to reach academics or researchers across the nation who were working on similar problems, or even to learn more about computer science.
The tradition lives on today as software engineers, academics and students from all over the world still convene at ACM and IEEE to challenge themselves to solve the world’s toughest problems. Both have committees that help shape today’s computer science education and research, and the advancement of the computing technology of the future.

The post War, Passion & the Origin of Computer Societies appeared first on HackerRank Blog.

]]>
https://www.hackerrank.com/blog/war-passion-the-origin-of-computer-societies/feed/ 7