
A Worked Example of Efficiently Parsing JSON Data in SQL Server


1. Background

I recently ran into a requirement where the source data lives in PostgreSQL and is stored as JSON. SQL Server cannot use it directly in that form, so it first has to be parsed into structured rows and columns before it can be reused.

The sample data looks like this:

'[{"key":"2019-01-01","value":"4500.0"},{"key":"2019-01-02","value":"4500.0"},{"key":"2019-01-03","value":"4500.0"},{"key":"2019-01-04","value":"4500.0"},{"key":"2019-01-05","value":"4500.0"},{"key":"2019-01-06","value":"4500.0"},{"key":"2019-01-07","value":"4500.0"},{"key":"2019-01-08","value":"4500.0"},{"key":"2019-01-09","value":"4500.0"},{"key":"2019-01-10","value":"4500.0"},{"key":"2019-01-11","value":"4500.0"},{"key":"2019-01-12","value":"4500.0"},{"key":"2019-01-13","value":"4500.0"},{"key":"2019-01-14","value":"4500.0"},{"key":"2019-01-15","value":"4500.0"},{"key":"2019-01-16","value":"4500.0"},{"key":"2019-01-17","value":"4500.0"},{"key":"2019-01-18","value":"4500.0"},{"key":"2019-01-19","value":"4500.0"},{"key":"2019-01-20","value":"4500.0"},{"key":"2019-01-21","value":"4500.0"},{"key":"2019-01-22","value":"4500.0"},{"key":"2019-01-23","value":"4500.0"},{"key":"2019-01-24","value":"4500.0"},{"key":"2019-01-25","value":"4500.0"},{"key":"2019-01-26","value":"4500.0"},{"key":"2019-01-27","value":"4500.0"},{"key":"2019-01-28","value":"4500.0"},{"key":"2019-01-29","value":"4500.0"},{"key":"2019-01-30","value":"4500.0"},{"key":"2019-01-31","value":"4500.0"}]'

After a bit of research, the approach is: first split the JSON string into individual key-value pairs, then extract the two column values from each key-value substring.

2. Split: break the JSON string into key-value substrings

The split relies on a row-number sequence combined with the delimiter; see the sample below.
A run of consecutive integers (starting at 1) is used as an index: at every position of the source string a substring of length 1 (the length of the delimiter) is taken, and whenever that character is the delimiter the position is treated as a valid record to process. It is a bit like the shotgun method used in DNA sequencing: cast a wide net first, then identify and follow the markers.

/*
 * Date   : 2020-07-01
 * Author : 飛虹
 * Sample : split a delimited string into a single column of multiple values
 * Input  : the string 'jun,cong,haha'
 * Output : one column with the values 'jun', 'cong', 'haha'
 */
declare @s nvarchar(500) = N'jun,cong,haha'
			,@sep nvarchar(5) = N',';
with cte_Num as (
	select 1 as n
	union all
	select n+1 n from cte_Num where n<100
)
select d.s, a.n
		  ,n-len(replace(left(s, n), @sep, '')) + 1 as pos        -- ordinal of the element
		  ,CHARINDEX(@sep, s+@sep, n) as next_sep                 -- position of the next delimiter
		  ,substring(s, n, CHARINDEX(@sep, s+@sep, n)-n) as element
from (select @s as s) as d
 join cte_Num a
 on n<=len(s)
 and substring(@sep+s, n, 1) = @sep                              -- n marks the start of an element
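
For reference (not part of the original post), with the sample input 'jun,cong,haha' and separator ',' the query above should return roughly the following:

    s              n    pos  next_sep  element
    jun,cong,haha  1    1    4         jun
    jun,cong,haha  5    2    9         cong
    jun,cong,haha  10   3    14        haha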

3. Extract: create a function that pulls the values out of a key-value substring

Based on the result of step 2, the long JSON string can be cut into key-value strings such as "2020-01-01":"98.99". From there it is straightforward: you can either write a table-valued function that returns the result, or extract the pieces directly with substring. Here a table-valued function is created to encapsulate the logic.

 /*
  *******************************************************************************
  *     Date : 2020-07-01
  *   Author : 飛虹
  *     Note : use PATINDEX pattern matching inside a WHILE loop to locate the
  *            unwanted characters one by one and replace each with an empty string.
  * Function : getDateAmt
  *    Input : a key-value string such as '"2020-01-01":"98.99"'
  *   Output : a table (date column, numeric column) with the values 2020-01-01, 98.99
  *******************************************************************************
 */
 CREATE FUNCTION dbo.getDateAmt(@S VARCHAR(100))
 RETURNS   @tb_rs table(dt date, amt decimal(28,14))
 AS
 BEGIN
	 WHILE PATINDEX('%[^0-9,-.]%',@S) > 0
		 BEGIN
			 -- match: strip every character that is not a digit, comma, hyphen or dot
 			 set @S=stuff(@S,patindex('%[^0-9,-.]%',@S),1,'')
		 END
		 insert into @tb_rs
			select SUBSTRING(@S,1,charindex(',',@S)-1)
				 , substring(@S,charindex(',',@S)+1, len(@S) )
		return
  END
 GO

 -- test
 select * from dbo.getDateAmt('{"key":"2019-01-01","value":"4500.0"')
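
With that test input the function should return a single row, roughly dt = 2019-01-01 and amt = 4500.00000000000000: the WHILE loop strips every character that is not a digit, comma, hyphen or dot, leaving '2019-01-01,4500.0', which the final SUBSTRING calls then split at the comma.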
 

4. Complete example

The complete script is below. Everything is done with CTEs, so it can be run directly to preview the result.

;with cte_t1 as (
	select * from
	( values('jun','[{"key":"2019-01-01","value":"4500.0"},{"key":"2019-01-02","value":"4500.0"},{"key":"2019-01-03","value":"4500.0"},{"key":"2019-01-04","value":"4500.0"},{"key":"2019-01-05","value":"4500.0"},{"key":"2019-01-06","value":"4500.0"},{"key":"2019-01-07","value":"4500.0"},{"key":"2019-01-08","value":"4500.0"},{"key":"2019-01-09","value":"4500.0"},{"key":"2019-01-10","value":"4500.0"},{"key":"2019-01-11","value":"4500.0"},{"key":"2019-01-12","value":"4500.0"},{"key":"2019-01-13","value":"4500.0"},{"key":"2019-01-14","value":"4500.0"},{"key":"2019-01-15","value":"4500.0"},{"key":"2019-01-16","value":"4500.0"},{"key":"2019-01-17","value":"4500.0"},{"key":"2019-01-18","value":"4500.0"},{"key":"2019-01-19","value":"4500.0"},{"key":"2019-01-20","value":"4500.0"},{"key":"2019-01-21","value":"4500.0"},{"key":"2019-01-22","value":"4500.0"},{"key":"2019-01-23","value":"4500.0"},{"key":"2019-01-24","value":"4500.0"},{"key":"2019-01-25","value":"4500.0"},{"key":"2019-01-26","value":"4500.0"},{"key":"2019-01-27","value":"4500.0"},{"key":"2019-01-28","value":"4500.0"},{"key":"2019-01-29","value":"4500.0"},{"key":"2019-01-30","value":"4500.0"},{"key":"2019-01-31","value":"4500.0"}]')
		   ,('congc','[{"key":"2019-01-01","value":"347.82608695652175"},{"key":"2019-01-02","value":"347.82608695652175"},{"key":"2019-01-03","value":"347.82608695652175"},{"key":"2019-01-04","value":"347.82608695652175"},{"key":"2019-01-07","value":"347.82608695652175"},{"key":"2019-01-08","value":"347.82608695652175"},{"key":"2019-01-09","value":"347.82608695652175"},{"key":"2019-01-10","value":"347.82608695652175"},{"key":"2019-01-11","value":"347.82608695652175"},{"key":"2019-01-14","value":"347.82608695652175"},{"key":"2019-01-15","value":"347.82608695652175"},{"key":"2019-01-16","value":"347.82608695652175"},{"key":"2019-01-17","value":"347.82608695652175"},{"key":"2019-01-18","value":"347.82608695652175"},{"key":"2019-01-21","value":"347.82608695652175"},{"key":"2019-01-22","value":"347.82608695652175"},{"key":"2019-01-23","value":"347.82608695652175"},{"key":"2019-01-24","value":"347.82608695652175"},{"key":"2019-01-25","value":"347.82608695652175"},{"key":"2019-01-28","value":"347.82608695652175"},{"key":"2019-01-29","value":"347.82608695652175"},{"key":"2019-01-30","value":"347.82608695652175"},{"key":"2019-01-31","value":"347.82608695652175"}]')
	) as t(name, jsonStr)
)
, cte_rn as (
	select 1 as rn
	union all
	select rn+1 from cte_rn where rn < 1000
)
, cte_splitJson as (
	select a.name
		  ,replace(replace(a.jsonStr,'[',''),']','') as jsonStr
		  ,substring(replace(replace(a.jsonStr,'[',''),']','')
					, b1.rn
					, charindex('},', replace(replace(a.jsonStr,'[',''),']','')+'},', b1.rn)-b1.rn ) as value_json
	from cte_t1 a
	cross join cte_rn b1
	where substring('},'+replace(replace(a.jsonStr,'[',''),']',''), rn, 2) = '},'
)
select *
from cte_splitJson a
	cross apply dbo.getDateAmt(a.value_json) as t1
-- note: the row-number CTE above recurses past the default limit of 100 levels,
-- so the recursion limit has to be lifted here
option(maxrecursion 0)

5. Open issue

Tested on an ordinary desktop PC, the performance is a concern: roughly 15 minutes for 500,000 rows, which is hard to accept. If anyone has run into this before and knows how to speed it up, or has a different approach, please share it.
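
One alternative worth measuring (not from the original article): on SQL Server 2016 or later, the built-in OPENJSON function can shred the array directly and avoids both the recursive row-number CTE and the character-by-character WHILE loop. A minimal sketch, assuming a 2016+ database compatibility level and an abbreviated version of the sample data above:

;with cte_t1 as (
	-- abbreviated sample: two array elements instead of the full month
	select * from ( values('jun', N'[{"key":"2019-01-01","value":"4500.0"},{"key":"2019-01-02","value":"4500.0"}]')
	) as t(name, jsonStr)
)
select a.name
	  ,try_cast(j.dt  as date)           as dt
	  ,try_cast(j.amt as decimal(28,14)) as amt
from cte_t1 a
cross apply openjson(a.jsonStr)
	 with ( dt  varchar(10) '$.key'
		  , amt varchar(50) '$.value' ) as j;

OPENJSON returns one row per array element and parses the key/value pairs natively, so for large volumes it should be considerably faster than the recursive-CTE split, though I have not benchmarked it here.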

That wraps up this article on efficiently parsing JSON data in SQL Server. For more on parsing JSON in SQL Server, search the earlier articles or browse the related articles below, and thank you for your continued support!

Tags: MsSQL