MaxCompute supports the pay-as-you-go billing method for SQL, MapReduce, Spark, Mars, and MaxCompute Query Acceleration (MCQA) jobs.
- Pay-as-you-go: You are charged for each job based on the resources consumed by the job. This billing method is used for standard SQL jobs, SQL jobs that reference external tables, MapReduce jobs, Spark jobs, Mars jobs, and MCQA jobs.
- Subscription: You can subscribe to specific resources.
MaxCompute supports SQL, MapReduce, Spark, Mars, MCQA, Graph, and machine learning jobs. You are charged for SQL, MapReduce, and Spark jobs, but not for user-defined functions (UDFs). Mars jobs are billable from September 1, 2020, and MCQA jobs from October 1, 2020. You are not charged for other types of computing jobs.
Subscription
Resource | Memory size | CPU core | Price (USD per month) |
---|---|---|---|
1 CU | 4 GB | 1 | 22.0 |
After you purchase subscription computing resources, you can monitor and manage the resources by using MaxCompute Management. For more information, see Use MaxCompute Management.
We recommend that you select the pay-as-you-go billing method the first time you use MaxCompute. With the subscription billing method, you purchase a fixed amount of computing resources in advance. New users often consume fewer resources than they purchase, which leaves some resources idle. In this case, the pay-as-you-go billing method is more cost-effective because you are charged only for the resources that you actually consume.
Billing for standard SQL jobs
Each time you run an SQL job, MaxCompute calculates the fee based on the amount of input data in computing and SQL complexity. On the following day, MaxCompute aggregates the fees for all executed SQL jobs into one bill within your Alibaba Cloud account. Then, MaxCompute deducts the fees from the balance of your Alibaba Cloud account.
Fee for a standard SQL job = Amount of input data in computing × SQL complexity × Unit price of a standard SQL job
Item | Unit price |
---|---|
Standard SQL job | USD 0.0438 per GB |
- Amount of input data in computing: the amount of data scanned by an SQL job. Most SQL jobs support partition filtering and column pruning. Therefore, in most cases, this value is less than the amount of data in the source table.
  - Partition filtering: If you submit an SQL statement that contains the clause `WHERE ds > 20130101`, where `ds` is the partition key column, you are charged only for the data in the partitions that are read.
  - Column pruning: If you submit the SQL statement `SELECT f1,f2,f3 FROM t1;`, you are charged only for the data in columns f1, f2, and f3 of table t1. You are not charged for the data in the other columns.
- SQL complexity: The complexity of an SQL job is calculated based on the number of keywords in the SQL statements of the SQL job.
  - Number of SQL keywords = Number of JOIN clauses + Number of GROUP BY clauses + Number of ORDER BY clauses + Number of DISTINCT clauses + Number of window functions + MAX(Number of INSERT/UPDATE/DELETE statements - 1, 1).
  - Calculation of SQL complexity:
    - If the number of SQL keywords is less than or equal to 3, the complexity of an SQL job is 1.
    - If the number of SQL keywords is less than or equal to 6 but greater than or equal to 4, the complexity of an SQL job is 1.5.
    - If the number of SQL keywords is less than or equal to 19 but greater than or equal to 7, the complexity of an SQL job is 2.
    - If the number of SQL keywords is greater than or equal to 20, the complexity of an SQL job is 4.
  For more information about SQL keywords, see JOIN, GROUP BY, ORDER BY, Window functions, INSERT, and UPDATE and DELETE.
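The complexity tiers above can be sketched as a small helper. This is an illustrative function, not part of any MaxCompute API; the thresholds are taken exactly as listed:

```python
def sql_complexity(keyword_count: int) -> float:
    """Map a number of SQL keywords to the complexity factor used in billing."""
    if keyword_count <= 3:
        return 1.0
    if keyword_count <= 6:
        return 1.5
    if keyword_count <= 19:
        return 2.0
    return 4.0
```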
You can estimate the fee for an SQL job before you run it by using the COST SQL command:

```
COST SQL <SQL Sentence>;
```

Example:

```
odps@ $odps_project >COST SQL SELECT DISTINCT total1 FROM
(SELECT id1, COUNT(f1) AS total1 FROM in1 GROUP BY id1) tmp1
ORDER BY total1 DESC LIMIT 100;
Input:1825361100.8 Bytes
Complexity:1.5
```

In this example, the amount of input data is 1,825,361,100.8 bytes, which is 1.7 GB. The estimated fee is calculated as follows: 1.7 GB × 1.5 × USD 0.0438 per GB ≈ USD 0.11.
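The estimate above can be reproduced with a short sketch. The function name and structure are ours for illustration; only the unit price and the formula come from this document:

```python
UNIT_PRICE_STANDARD_SQL = 0.0438  # USD per GB, unit price of a standard SQL job

def standard_sql_fee(input_bytes: float, complexity: float) -> float:
    """Fee = amount of input data (GB) x SQL complexity x unit price."""
    gigabytes = input_bytes / 1024 ** 3
    return round(gigabytes * complexity * UNIT_PRICE_STANDARD_SQL, 2)

# 1,825,361,100.8 bytes is exactly 1.7 GB; 1.7 x 1.5 x 0.0438 ~= USD 0.11
print(standard_sql_fee(1825361100.8, 1.5))
```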
- The bill is generated before 06:00 on the following day.
- You are not charged for failed SQL jobs.
- You are charged for SQL jobs based on the amount of data after compression, in the same way as the storage service.
Billing for SQL jobs that reference external tables
Since March 2019, you are charged for MaxCompute SQL jobs that reference external tables based on the pay-as-you-go billing method.
Fee for an SQL job = Amount of input data in computing × Unit price of SQL jobs that reference external tables
Item | Unit price |
---|---|
SQL job that references external tables | USD 0.0044 per GB |
- The bill is generated before 06:00 on the following day.
- For jobs that reference internal and external tables, MaxCompute separately calculates the fees for jobs that reference internal tables and jobs that reference external tables.
- You cannot estimate the fees for SQL jobs that reference external tables.
Pay-as-you-go billing for MapReduce jobs
Since December 19, 2017, you are charged for MaxCompute MapReduce jobs based on the pay-as-you-go billing method.
Fee for MapReduce jobs of the day = Number of billable hours × Unit price of a MapReduce job (USD per hour)
Item | Unit price |
---|---|
MapReduce job | USD 0.0690 per hour per job |
Number of billable hours of a MapReduce job = Number of hours for which a job runs × Number of CPU cores consumed by the job
For example, if a MapReduce job that runs for 0.5 hours consumes 100 CPU cores, the number of billable hours is 50 based on the following formula: 100 cores × 0.5 hours = 50 billable hours.
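The MapReduce formula can be sketched as follows. The function name is illustrative; the unit price and formula are the ones stated above:

```python
MAPREDUCE_UNIT_PRICE = 0.0690  # USD per billable hour

def mapreduce_fee(cpu_cores: int, hours: float) -> float:
    """Billable hours = cores x running hours; fee = billable hours x unit price."""
    billable_hours = cpu_cores * hours
    return round(billable_hours * MAPREDUCE_UNIT_PRICE, 2)

# 100 cores x 0.5 hours = 50 billable hours -> 50 x 0.0690 = USD 3.45
print(mapreduce_fee(100, 0.5))
```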
- The bill is generated before 06:00 on the following day.
- You are not charged for failed MapReduce jobs.
- The queuing time of jobs is not counted in the billable hours.
- If you select the subscription billing method for MaxCompute, you can run MapReduce jobs free of charge within the subscription period.
Pay-as-you-go billing for Spark jobs
Fee for Spark jobs of the day = Number of billable hours × Unit price (USD 0.1041 per hour per job)
Number of billable hours of a Spark job = MAX[Number of CPU cores × Number of hours for which a job runs, ROUND UP(Memory size × Number of hours for which a job runs/4)]
- You must provide the number of CPU cores consumed, number of hours for which a job runs, and memory size.
- One billable hour is equivalent to 1 CPU core and 4 GB of memory.
For example, if a Spark job that runs for 1 hour consumes 2 CPU cores and 5 GB of memory, the number of billable hours is 2 based on the following formula: MAX[2 × 1, ROUND UP(5 × 1/4)] = 2. If a Spark job that runs for 1 hour consumes 2 CPU cores and 10 GB of memory, the number of billable hours is 3 based on the following formula: MAX[2 × 1, ROUND UP(10 × 1/4)] = 3.
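The billable-hours formula can be sketched as below. The same formula applies to Mars jobs, since both use the MAX of CPU-based and memory-based hours with 4 GB per billable hour; the function name is ours:

```python
import math

def billable_hours(cpu_cores: float, memory_gb: float, hours: float) -> int:
    """One billable hour covers 1 CPU core and 4 GB of memory; the larger
    of the two demands determines the charge."""
    cpu_hours = cpu_cores * hours
    memory_hours = math.ceil(memory_gb * hours / 4)  # ROUND UP(memory x hours / 4)
    return max(cpu_hours, memory_hours)

print(billable_hours(2, 5, 1))   # MAX[2 x 1, ROUND UP(5/4)] = 2
print(billable_hours(2, 10, 1))  # MAX[2 x 1, ROUND UP(10/4)] = 3
```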
- The bill is generated before 06:00 on the following day.
- The queuing time of jobs is not counted in the billable hours.
- The fee for similar jobs may vary based on the amount of specified resources.
- If you select the subscription billing method for MaxCompute, you can run Spark jobs free of charge within the subscription period.
Pay-as-you-go billing for Mars jobs
Fee for Mars jobs of the day = Number of billable hours × Unit price (USD 0.1041 per hour per job)
- Calculate the number of CPU cores and memory size that are consumed by the job.
- One billable hour is equivalent to 1 CPU core and 4 GB of memory.
- The number of billable hours of a Mars job is calculated based on the following formula: MAX[Number of CPU cores × Number of job running hours, ROUND UP(Memory size × Number of job running hours/4)]. For example, if a Mars job that runs for 1 hour consumes 2 CPU cores and 5 GB of memory, the number of billable hours is 2 based on the following formula: MAX[2 × 1, ROUND UP(5 × 1/4)] = 2. If a Mars job that runs for 1 hour consumes 2 CPU cores and 10 GB of memory, the number of billable hours is 3 based on the following formula: MAX[2 × 1, ROUND UP(10 × 1/4)] = 3.
After a Mars job is run, MaxCompute calculates the billable hours of the job. On the following day, MaxCompute aggregates the fees for all executed Mars jobs into one bill within your Alibaba Cloud account. Then, MaxCompute deducts the fees from the balance of your Alibaba Cloud account.
- The bill is generated before 06:00 on the following day.
- The queuing time of jobs is not counted in the billable hours.
- The fee for similar jobs may vary based on the amount of specified resources.
- If you select the subscription billing method for MaxCompute, you can run Mars jobs free of charge within the subscription period.
Pay-as-you-go billing for MCQA jobs
Since October 1, 2020, you are charged for MCQA jobs based on the pay-as-you-go billing method. For more information, see Overview.
Each time you run an MCQA job, MaxCompute calculates the fee based on the amount of input data of the job. On the following day, MaxCompute aggregates the fees for all executed MCQA jobs.
Fee for an MCQA job = Amount of input data for the MCQA job × Unit price (USD 0.0438 per GB)
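The MCQA fee, including the 10 MB minimum scan per job noted below, can be sketched like this. The function name and the rounding to four decimal places are ours for illustration:

```python
MCQA_UNIT_PRICE = 0.0438  # USD per GB
MIN_SCAN_MB = 10          # each MCQA job is billed for at least 10 MB of scanned data

def mcqa_fee(scanned_mb: float) -> float:
    """Fee = max(scanned data, 10 MB), converted to GB, x unit price."""
    billed_mb = max(scanned_mb, MIN_SCAN_MB)
    return round(billed_mb / 1024 * MCQA_UNIT_PRICE, 4)

print(mcqa_fee(2048))  # 2 GB scanned -> 2 x 0.0438 = USD 0.0876
print(mcqa_fee(1))     # below the minimum, billed as 10 MB
```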
- MCQA jobs use dedicated computing resources. If you select the subscription billing method for MaxCompute, MaxCompute calculates the fee based on the amount of data scanned by an MCQA job when you run the MCQA job.
- MaxCompute calculates the fee based on the amount of data scanned by each MCQA job. Each MCQA job scans at least 10 MB of data. You are charged for canceled MCQA jobs based on the amount of data scanned.
- The bill is generated before 06:00 on the following day.
- No fee is generated if no query is performed.
- By default, MaxCompute performs column-oriented storage and compression on data. MaxCompute calculates the amount of scanned data based on the compressed data.
- When you query a partitioned table, you can use partition filtering conditions to reduce the amount of scanned data and improve query performance.
- MCQA is in public preview in the following regions: China (Hong Kong), Singapore, Indonesia (Jakarta), India (Mumbai), and Malaysia (Kuala Lumpur). MCQA is pending release in other regions.