MaxCompute 2.0: The Evolution of NewSQL

At the Alibaba Cloud MaxCompute session of the 2017 Computing Conference, held on October 14, Lin Wei, computing platform architect at Alibaba, delivered a speech titled "MaxCompute 2.0: The Evolution of NewSQL," sharing the optimization work done on NewSQL for MaxCompute 2.0.

In the DT (data technology) era, where a growing number of companies are moving their application data to the cloud, NewSQL has become a hot topic in the industry, as it offers users a good way to access and store data through programming interfaces. This article discusses the background of Alibaba Cloud MaxCompute's adoption of NewSQL and the key technologies involved.

Background

When it comes to NewSQL, SQL is an unavoidable topic. The "data processing" and "databases" that people spoke of in the 1980s and 1990s generally meant relational databases: systems with strong structure and semantics that let anyone run quick interactive queries by writing a query language. However, as massive amounts of data were generated with the rapid growth of the Internet, traditional databases came to face a series of challenges.

The first challenge is poor horizontal scalability. In an Internet environment, a traditional database struggles to support structured and unstructured data, as well as voice and video data, and therefore lacks flexibility. Traditional databases are also weak in fault tolerance: in a distributed environment, data centers must carry large volumes of data, which demands a high level of fault tolerance. SQL thus became inadequate for the surge of big data applications, leading to the birth of NoSQL for processing unstructured data.

A NoSQL database is a non-relational database with weak semantics and good flexibility. Able to handle unstructured, semi-structured, and structured data, it scales well horizontally. NoSQL provides powerful user-defined functions (UDFs), which let users define key-value pairs for map and reduce functions to process data, and it offers flexible APIs to support non-relational computation. Because the nodes involved in a computation are independent of one another, fault tolerance is excellent. NoSQL is clearly far ahead of SQL in big data processing, which led to a new wave of big data offerings such as Google's BigTable and MapReduce.
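The map/reduce style of UDF described above can be sketched in a few lines of plain Python. This is a toy single-process word count, not any particular NoSQL engine's API; the function names and the driver are illustrative assumptions:

```python
from collections import defaultdict

def map_fn(record):
    # User-defined map: emit (key, value) pairs from one raw record.
    for word in record.split():
        yield (word, 1)

def reduce_fn(key, values):
    # User-defined reduce: aggregate all values that share a key.
    return (key, sum(values))

def run_mapreduce(records):
    # Minimal driver: shuffle pairs by key, then reduce each group.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

counts = run_mapreduce(["big data", "big table"])
print(counts)  # {'big': 2, 'data': 1, 'table': 1}
```

The point is the contract: the user supplies the key-value logic in `map_fn` and `reduce_fn`, and the framework handles grouping and distribution.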

Now, Alibaba Cloud has launched its own version of NewSQL, aiming to solve what SQL and NoSQL cannot solve on their own.

NewSQL

The idea behind NewSQL is to return to the relational model. When working with a NoSQL database, programmers must separately define map, reduce, key-value handling, and many other details, so it is hard to tell what a job does unless every coding detail is laid out. Returning to the relational model means programmers describe what to do rather than how to do it: anyone reading a NewSQL query can immediately see what it computes.
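The "what versus how" contrast can be made concrete with a small runnable sketch. The imperative version spells out the computation step by step; the declarative version states only the desired result and leaves the execution strategy to an engine (SQLite here, standing in for any relational planner):

```python
import sqlite3

# Imperative (NoSQL-style): the programmer spells out *how* to compute.
def total_by_category(rows):
    totals = {}
    for category, amount in rows:
        totals[category] = totals.get(category, 0) + amount
    return totals

rows = [("books", 10), ("toys", 5), ("books", 7)]

# Declarative (NewSQL-style): describe *what* is wanted; the engine
# (here SQLite's planner) chooses the execution strategy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (category TEXT, amount INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
declarative = dict(conn.execute(
    "SELECT category, SUM(amount) FROM sales GROUP BY category"))

assert declarative == total_by_category(rows)
print(declarative)  # {'books': 17, 'toys': 5}
```

Both produce the same answer; the difference is the contract, and the declarative form is what gives an optimizer room to work.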

NewSQL requires strong system optimization to do its job. With a powerful optimizer, NewSQL can integrate a variety of functions and adaptively produce an efficient physical execution plan. In the process, the unique strengths of NoSQL must be preserved: the ability to process unstructured data, a rich collection of UDFs, and distributability.

If users have to write the execution plan themselves, as with NoSQL, several problems arise. First, programmers cannot notice changes in the data and environment in time, which easily leads to data skew. Second, as computing complexity grows and barriers exist both upstream and downstream, programmers cannot quickly work out the best execution plan. Computation also requires knowledge sharing, which is impeded without a high-level language with strong semantics. And in an environment where resources are shared, it is hard for an individual programmer to capture the big picture. Hence the turn back to NewSQL: programmers describe what they want done, and the system produces an efficient execution plan through optimization.

NewSQL shows great adaptability in the three scenarios shown in the figure. The hope is that programmers describe their tasks declaratively; where that lacks flexibility, UDFs balance out the high-level semantics, so that system optimization remains high-performing, intelligent, and adaptive.

As a matter of fact, the whole industry is moving in this direction. Microsoft, for instance, offers Scope for optimization on top of the Dryad engine. Databricks offers Spark SQL in addition to the Spark package to accelerate iteration. Hadoop has evolved from MapReduce to Hive, and then to Hive 2.0. Google is promoting Spanner, which supports SQL semantics in addition to MapReduce. Alibaba Cloud's MaxCompute 1.0 is now stepping into MaxCompute 2.0, where system optimization comes into play.

Key Technologies

Striking a balance between SQL and NoSQL requires several key technologies.

The ability to handle unstructured, semi-structured, and structured data is required.

In an Internet environment, users need to provide serialize and deserialize functions to enable dynamic conversion from unstructured to structured data, so that structured data can be extracted for computation. To overcome the limitations of traditional databases, user-defined functions must be supported, with diverse UDF features and good interoperability across programming languages. Users also need to define partitions to connect upstream and downstream stages, and to connect the input and output ends with other Internet applications.
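The serialize/deserialize pair described above can be illustrated with a small Python sketch. The record schema (`user_id`, `action`) and the JSON format are assumptions for illustration, not MaxCompute's actual interface:

```python
import json

def deserialize(raw_line):
    # User-supplied deserializer: turn one raw text line into a
    # structured (user_id, action) record, or None if unparseable.
    try:
        obj = json.loads(raw_line)
        return (int(obj["user_id"]), str(obj["action"]))
    except (ValueError, KeyError):
        return None

def serialize(record):
    # The inverse: write a structured record back out as text.
    user_id, action = record
    return json.dumps({"user_id": user_id, "action": action})

raw = ['{"user_id": 1, "action": "click"}', "garbage line"]
records = [r for r in map(deserialize, raw) if r is not None]
print(records)  # [(1, 'click')]
```

Once raw lines pass through `deserialize`, the rest of the pipeline can treat them as structured rows, which is exactly the dynamic unstructured-to-structured conversion the text describes.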

DAG execution is needed.

This overcomes the limitations of MapReduce so that the system can deploy loops and iteration on a DAG. An asymmetric graph is also needed to support complex physical execution plans. Only then can the optimizer produce an efficient execution plan and the language be complete.

Most importantly, we need to have a complete UDF system.

A complete collection of UDFs reduces the relational model to a functional language, which can be used to create different DAG execution plans. To ensure flexible interaction at the language level, functions such as serialize/deserialize, join, aggregate, and processor (supporting hash, range, and direct hash) are provided.
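The partitioning side of the processor functions mentioned above can be sketched as follows. These two helpers are illustrative assumptions, showing the difference between hash partitioning (even spread) and range partitioning (order-preserving):

```python
import bisect

def hash_partition(key, n):
    # Hash partitioning: spread keys evenly across n workers.
    return hash(key) % n

def range_partition(key, boundaries):
    # Range partitioning: boundaries are sorted split points; a key goes
    # to the first range whose upper bound exceeds it, so global key
    # order is preserved across partitions.
    return bisect.bisect_right(boundaries, key)

keys = [3, 15, 27, 42]
print([range_partition(k, [10, 30]) for k in keys])  # [0, 1, 1, 2]
```

A sort-merge join downstream would favor range partitioning, while an aggregation that only needs co-location of equal keys can use the cheaper hash scheme; exposing both as UDFs is what lets the user connect upstream and downstream stages flexibly.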

Powerful Optimizer

A powerful optimizer helps optimize data processing, from a single statement up to thousands of stored procedures. NoSQL adopts a functional programming style that can create rather complex graphs, whereas in a traditional database statements are committed one by one, which makes job sharing poor. A powerful optimizer lets users write more complex query and storage procedures; this in turn yields a huge logical execution plan and much more room for optimization. A more advanced optimizer is therefore needed, one that moves from rule-based optimization to cost-based optimization.
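Cost-based optimization can be reduced to a one-line idea: enumerate candidate physical plans, estimate a cost for each, and keep the cheapest. The candidate names and cost figures below are made up for illustration:

```python
def cost_based_choice(plans):
    # Cost-based optimization in miniature: estimate a cost for each
    # candidate physical plan and keep the cheapest, instead of firing
    # whichever rewrite rule happens to match first.
    return min(plans, key=lambda p: p["cost"])

# Hypothetical join candidates with made-up cost estimates.
candidates = [
    {"name": "nested_loop_join", "cost": 9000},
    {"name": "hash_join", "cost": 300},
    {"name": "sort_merge_join", "cost": 450},
]
best = cost_based_choice(candidates)
print(best["name"])  # hash_join
```

The hard part in a real engine is not the `min` but the cost model: the estimates must track data sizes, skew, and cluster state, which is why the article stresses statistics and run-time feedback.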

In addition, the optimizer must take the distributed setting into account. In a NoSQL scenario, for example, many UDF extensions, whether for data, users, or computation, can be used to produce good execution plans.

The following figure shows an interesting example of how the optimizer and UDFs work together.

On the left is the result of not understanding the UDFs: optimization is unsatisfactory because the output properties of the UDFs are not known, resulting in an inefficient physical execution plan. The right shows healthy interaction between the UDFs and the optimizer: the optimizer achieves global optimization and works with the user to understand the UDF properties, shifting from black-box optimization to gray-box optimization.

A Real Example

In the use case shown in the figure, the cost of the distributed model is dramatically reduced, and both flexibility and optimization are achieved. Several UDF properties are worth considering:

A. Is the UDF row-wise? Is it a monotonic function?
B. Does it keep some columns unchanged (pass-through)?
C. Does it keep the "clustered by" column unchanged? The "sorted by" column?
D. Selectivity, data distribution of the output, and more.
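One way such properties reach the optimizer is for the UDF author to declare them as annotations. The decorator below is a hypothetical sketch of that idea, not MaxCompute's actual annotation mechanism; the property names mirror the list above:

```python
def udf_properties(**props):
    # Hypothetical annotation: attach declared properties to a UDF so an
    # optimizer can treat it as a gray box instead of a black box.
    def wrap(fn):
        fn.properties = props
        return fn
    return wrap

@udf_properties(row_wise=True, pass_through=("user_id",), selectivity=0.1)
def filter_clicks(row):
    # A row-wise filter: keeps click events, drops everything else.
    return row if row["action"] == "click" else None

# An optimizer reading these properties can see that user_id survives
# the UDF unchanged, so data already clustered by user_id need not be
# re-shuffled after it.
print(filter_clicks.properties["pass_through"])  # ('user_id',)
```

Declaring `pass_through` and `selectivity` is exactly what turns the black box gray: the optimizer still cannot see inside the function body, but it can now reason about clustering, ordering, and output size.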

Optimization in a distributed scenario works differently from a stand-alone SQL database. Because of the vast number of NoSQL UDFs and the variety of dynamic conditions in a distributed environment (such as the topology used to assign workers and the distribution of failure regions), balancing run-time and compile-time optimization requires a powerful engine that can optimize at run time. Run-time optimization includes determining the number of partitions and their boundaries, selecting the join method, and choosing an efficient data shuffle approach.
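Determining the partition count at run time, the first item in that list, can be sketched as follows. The 256 MiB target and the clamp bounds are illustrative assumptions, not MaxCompute's real defaults:

```python
def runtime_partition_count(observed_bytes,
                            target_bytes_per_partition=256 << 20,
                            min_parts=1, max_parts=4096):
    # Run-time optimization sketch: instead of fixing the partition count
    # at compile time, size it from the data volume actually observed
    # once the upstream stage has produced its output.
    parts = -(-observed_bytes // target_bytes_per_partition)  # ceil division
    return max(min_parts, min(max_parts, parts))

print(runtime_partition_count(1 << 30))  # 4 partitions for 1 GiB
print(runtime_partition_count(1000))     # 1 (clamped to the minimum)
```

A compile-time plan would have to guess this number from statistics; deferring the decision until the upstream output size is known is precisely the kind of run-time adaptivity the paragraph describes.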

Conclusion

NewSQL is designed to help developers build programs more efficiently and achieve interactive computing by solving what NoSQL and SQL cannot solve on their own. With its powerful system optimization capability, NewSQL is expected to be highly available, interpretable, performant, and adaptive, driving the growth of the whole MaxCompute ecosystem.
