With the rapid development of social networking, e-commerce, finance, retail, the Internet of Things, and other industries, the real world has become a vast and complex web of relationships, and there is a pressing need for a database that supports computation over massive amounts of highly connected data: the graph database. This series of articles organizes and summarizes what I have learned about knowledge graphs and graph databases.
This article shows how to run Nebula Exchange from IDEA and import local CSV files into Nebula Graph.
Intended audience: architects, technical experts, and senior engineers interested in knowledge graphs and graph databases.
Nebula Exchange (Exchange for short) is an Apache Spark application for bulk-migrating cluster data into Nebula Graph in a distributed environment. It supports migrating both batch and streaming data in a variety of formats.
Exchange consists of three components: Reader, Processor, and Writer. The Reader reads data from a given source and returns a DataFrame; the Processor iterates over the rows of the DataFrame and, following the fields mapping in the configuration file, looks up each value by column name. Once the configured number of rows (one batch) has been traversed, the Writer writes the collected data to Nebula Graph in a single request. The diagram below illustrates how Exchange transforms and migrates the data.
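To make the field-mapping step concrete, here is a minimal conceptual sketch (not the actual Exchange implementation) of the Reader → Processor → Writer flow, assuming a headerless CSV whose columns _c0 and _c1 are mapped to the Nebula properties courseId and courseName; the path and the write call are placeholders.

import org.apache.spark.sql.{DataFrame, Row, SparkSession}

object FlowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("flow-sketch").master("local[*]").getOrCreate()

    // Reader: load the source into a DataFrame
    val df: DataFrame = spark.read.csv("file:///tmp/course.csv") // placeholder path

    // Processor: project the configured source columns (fields) and rename them
    // to the Nebula property names (nebula.fields)
    val fields     = Seq("_c0", "_c1")
    val nebulaKeys = Seq("courseId", "courseName")
    val projected  = df.select(fields.map(df.col): _*).toDF(nebulaKeys: _*)

    // Writer: Exchange buffers rows and flushes them to Nebula Graph once the
    // configured batch size is reached; only the batching idea is shown here.
    projected.rdd.foreachPartition { rows: Iterator[Row] =>
      rows.grouped(256).foreach { batch =>
        println(s"would write ${batch.size} rows to Nebula Graph") // placeholder for the real client write
      }
    }
    spark.stop()
  }
}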
However, the official documentation states that for the file-based sources (those highlighted in the diagram), the files must reside on HDFS; local files are not supported. This article shows how to read local files instead.
Spark itself can read several file types from different file systems; for details on reading local Windows files, see "Spark中读取本地Windows文件".
file:// denotes a local file path. On Windows the path can be written as file:///E:/aa/bb/cc.txt or file:///E:\\aa\\bb\\cc.txt.
hdfs:// denotes an HDFS file path.
If a path has no scheme prefix, Spark treats it as an HDFS path by default (it prepends "hdfs://").
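As a quick illustration (a minimal sketch; the paths and namenode address are placeholders to adjust for your environment), the same CSV can be read with an explicit file:// or hdfs:// scheme:

import org.apache.spark.sql.SparkSession

object ReadPathDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("read-path-demo").master("local[*]").getOrCreate()

    // Explicit local-file scheme (Windows path)
    val localDf = spark.read.option("header", "false").csv("file:///E:/aa/bb/cc.txt")

    // Explicit HDFS scheme (namenode address is a placeholder)
    val hdfsDf = spark.read.option("header", "false").csv("hdfs://namenode:9000/aa/bb/cc.txt")

    // A path with no scheme, e.g. "/aa/bb/cc.txt", is resolved against the
    // default file system, which on a typical cluster is HDFS.
    localDf.show(5)
    spark.stop()
  }
}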
For the official example, see https://docs.nebula-graph.com.cn/nebula-exchange/use-exchange/ex-ug-import-from-csv/
In the original source, Spark's master is supplied through the --master argument of spark-submit. To make it convenient to run inside IDEA, we adjust the source and add a master parameter to the method call (note: it is a default parameter whose default value is the empty string).
Create an ExchangeUtil class containing the Exchange source code, adjusted so that the body of main is split into two methods, initParam and importData, which are easier to call.
package com.vesoft.nebula.exchange
import java.io.File
import com.vesoft.nebula.exchange.config._
import com.vesoft.nebula.exchange.processor.{EdgeProcessor, ReloadProcessor, VerticesProcessor}
import com.vesoft.nebula.exchange.reader._
import org.apache.commons.lang3.StringUtils
import org.apache.log4j.Logger
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}
/**
* ExchangeUtil wraps the Nebula Exchange entry point (a Spark job that writes data into
* Nebula Graph in parallel) so that it can be invoked programmatically, e.g. from IDEA.
*/
object ExchangeUtil {
private[this] val LOG = Logger.getLogger(this.getClass)
def main(args: Array[String]): Unit = {
val PROGRAM_NAME = "Nebula Graph Exchange"
val (c, configs, spark) = initParam(args, PROGRAM_NAME)
importData(c, configs, spark)
spark.close()
sys.exit(0)
}
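/**
 * Parses the command-line arguments and the config file and builds the SparkSession.
 * masterStr is the added default parameter: when non-blank (e.g. "local[4]" for running
 * inside IDEA) it is used as the Spark master instead of the spark-submit --master value.
 */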
def initParam(args: Array[String], programName: String, masterStr: String = ""): (Argument, Configs, SparkSession) = {
val options = Configs.parser(args, programName)
val c: Argument = options match {
case Some(config) => config
case _ =>
LOG.error("Argument parse failed")
sys.exit(-1)
}
val configs = Configs.parse(new File(c.config))
LOG.info(s"Config ${configs}")
val session = SparkSession
.builder()
.appName(programName)
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.config("spark.sql.shuffle.partitions", "1")
if (StringUtils.isNoneBlank(masterStr)) {
session.master(masterStr)
}
for (key <- configs.sparkConfigEntry.map.keySet) {
session.config(key, configs.sparkConfigEntry.map(key))
}
val sparkConf = new SparkConf()
sparkConf.registerKryoClasses(Array(classOf[com.facebook.thrift.async.TAsyncClientManager]))
// config hive for sparkSession
if (c.hive) {
if (configs.hiveConfigEntry.isEmpty) {
LOG.info("you don't config hive source, so using hive tied with spark.")
} else {
val hiveConfig = configs.hiveConfigEntry.get
sparkConf.set("spark.sql.warehouse.dir", hiveConfig.warehouse)
sparkConf
.set("javax.jdo.option.ConnectionURL", hiveConfig.connectionURL)
.set("javax.jdo.option.ConnectionDriverName", hiveConfig.connectionDriverName)
.set("javax.jdo.option.ConnectionUserName", hiveConfig.connectionUserName)
.set("javax.jdo.option.ConnectionPassword", hiveConfig.connectionPassWord)
}
}
session.config(sparkConf)
if (c.hive) {
session.enableHiveSupport()
}
val spark = session.getOrCreate()
// reload for failed import tasks
if (!c.reload.isEmpty) {
val batchSuccess = spark.sparkContext.longAccumulator(s"batchSuccess.reload")
val batchFailure = spark.sparkContext.longAccumulator(s"batchFailure.reload")
val data = spark.read.text(c.reload)
val processor = new ReloadProcessor(data, configs, batchSuccess, batchFailure)
processor.process()
LOG.info(s"batchSuccess.reload: ${batchSuccess.value}")
LOG.info(s"batchFailure.reload: ${batchFailure.value}")
sys.exit(0)
}
(c, configs, spark)
}
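/**
 * Imports tags and edges according to the parsed configs and, if an error path exists,
 * reimports the failed batches.
 */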
def importData(c: Argument, configs: Configs, spark: SparkSession): Unit = {
// import tags
if (configs.tagsConfig.nonEmpty) {
for (tagConfig <- configs.tagsConfig) {
LOG.info(s"Processing Tag ${tagConfig.name}")
val fieldKeys = tagConfig.fields
LOG.info(s"field keys: ${fieldKeys.mkString(", ")}")
val nebulaKeys = tagConfig.nebulaFields
LOG.info(s"nebula keys: ${nebulaKeys.mkString(", ")}")
val data = createDataSource(spark, tagConfig.dataSourceConfigEntry)
if (data.isDefined && !c.dry) {
val batchSuccess =
spark.sparkContext.longAccumulator(s"batchSuccess.${tagConfig.name}")
val batchFailure =
spark.sparkContext.longAccumulator(s"batchFailure.${tagConfig.name}")
val processor = new VerticesProcessor(
repartition(data.get, tagConfig.partition, tagConfig.dataSourceConfigEntry.category),
tagConfig,
fieldKeys,
nebulaKeys,
configs,
batchSuccess,
batchFailure)
processor.process()
if (tagConfig.dataSinkConfigEntry.category == SinkCategory.CLIENT) {
LOG.info(s"batchSuccess.${tagConfig.name}: ${batchSuccess.value}")
LOG.info(s"batchFailure.${tagConfig.name}: ${batchFailure.value}")
}
}
}
} else {
LOG.warn("Tag is not defined")
}
// import edges
if (configs.edgesConfig.nonEmpty) {
for (edgeConfig <- configs.edgesConfig) {
LOG.info(s"Processing Edge ${edgeConfig.name}")
val fieldKeys = edgeConfig.fields
LOG.info(s"field keys: ${fieldKeys.mkString(", ")}")
val nebulaKeys = edgeConfig.nebulaFields
LOG.info(s"nebula keys: ${nebulaKeys.mkString(", ")}")
val data = createDataSource(spark, edgeConfig.dataSourceConfigEntry)
if (data.isDefined && !c.dry) {
val batchSuccess = spark.sparkContext.longAccumulator(s"batchSuccess.${edgeConfig.name}")
val batchFailure = spark.sparkContext.longAccumulator(s"batchFailure.${edgeConfig.name}")
val processor = new EdgeProcessor(
repartition(data.get, edgeConfig.partition, edgeConfig.dataSourceConfigEntry.category),
edgeConfig,
fieldKeys,
nebulaKeys,
configs,
batchSuccess,
batchFailure
)
processor.process()
if (edgeConfig.dataSinkConfigEntry.category == SinkCategory.CLIENT) {
LOG.info(s"batchSuccess.${edgeConfig.name}: ${batchSuccess.value}")
LOG.info(s"batchFailure.${edgeConfig.name}: ${batchFailure.value}")
}
}
}
} else {
LOG.warn("Edge is not defined")
}
// reimport for failed tags and edges
if (ErrorHandler.existError(configs.errorConfig.errorPath)) {
val batchSuccess = spark.sparkContext.longAccumulator(s"batchSuccess.reimport")
val batchFailure = spark.sparkContext.longAccumulator(s"batchFailure.reimport")
val data = spark.read.text(configs.errorConfig.errorPath)
val processor = new ReloadProcessor(data, configs, batchSuccess, batchFailure)
processor.process()
LOG.info(s"batchSuccess.reimport: ${batchSuccess.value}")
LOG.info(s"batchFailure.reimport: ${batchFailure.value}")
}
}
/**
* Create data source for different data type.
*
* @param session The Spark Session.
* @param config The config.
* @return
*/
private[this] def createDataSource(
session: SparkSession,
config: DataSourceConfigEntry
): Option[DataFrame] = {
config.category match {
case SourceCategory.PARQUET =>
val parquetConfig = config.asInstanceOf[FileBaseSourceConfigEntry]
LOG.info(s"""Loading Parquet files from ${parquetConfig.path}""")
val reader = new ParquetReader(session, parquetConfig)
Some(reader.read())
case SourceCategory.ORC =>
val orcConfig = config.asInstanceOf[FileBaseSourceConfigEntry]
LOG.info(s"""Loading ORC files from ${orcConfig.path}""")
val reader = new ORCReader(session, orcConfig)
Some(reader.read())
case SourceCategory.JSON =>
val jsonConfig = config.asInstanceOf[FileBaseSourceConfigEntry]
LOG.info(s"""Loading JSON files from ${jsonConfig.path}""")
val reader = new JSONReader(session, jsonConfig)
Some(reader.read())
case SourceCategory.CSV =>
val csvConfig = config.asInstanceOf[FileBaseSourceConfigEntry]
LOG.info(s"""Loading CSV files from ${csvConfig.path}""")
val reader =
new CSVReader(session, csvConfig)
Some(reader.read())
case SourceCategory.HIVE =>
val hiveConfig = config.asInstanceOf[HiveSourceConfigEntry]
LOG.info(s"""Loading from Hive and exec ${hiveConfig.sentence}""")
val reader = new HiveReader(session, hiveConfig)
Some(reader.read())
case SourceCategory.KAFKA => {
val kafkaConfig = config.asInstanceOf[KafkaSourceConfigEntry]
LOG.info(s"""Loading from Kafka ${kafkaConfig.server} and subscribe ${kafkaConfig.topic}""")
val reader = new KafkaReader(session, kafkaConfig)
Some(reader.read())
}
case SourceCategory.NEO4J =>
val neo4jConfig = config.asInstanceOf[Neo4JSourceConfigEntry]
LOG.info(s"Loading from neo4j config: ${neo4jConfig}")
val reader = new Neo4JReader(session, neo4jConfig)
Some(reader.read())
case SourceCategory.MYSQL =>
val mysqlConfig = config.asInstanceOf[MySQLSourceConfigEntry]
LOG.info(s"Loading from mysql config: ${mysqlConfig}")
val reader = new MySQLReader(session, mysqlConfig)
Some(reader.read())
case SourceCategory.PULSAR =>
val pulsarConfig = config.asInstanceOf[PulsarSourceConfigEntry]
LOG.info(s"Loading from pulsar config: ${pulsarConfig}")
val reader = new PulsarReader(session, pulsarConfig)
Some(reader.read())
case SourceCategory.JANUS_GRAPH =>
val janusGraphSourceConfigEntry = config.asInstanceOf[JanusGraphSourceConfigEntry]
val reader = new JanusGraphReader(session, janusGraphSourceConfigEntry)
Some(reader.read())
case SourceCategory.HBASE =>
val hbaseSourceConfigEntry = config.asInstanceOf[HBaseSourceConfigEntry]
val reader = new HBaseReader(session, hbaseSourceConfigEntry)
Some(reader.read())
case _ => {
LOG.error(s"Data source ${config.category} not supported")
None
}
}
}
/**
* Repartition the data frame using the specified partition number.
*
* @param frame
* @param partition
* @return
*/
private[this] def repartition(frame: DataFrame,
partition: Int,
sourceCategory: SourceCategory.Value): DataFrame = {
if (partition > 0 && !CheckPointHandler.checkSupportResume(sourceCategory)) {
frame.repartition(partition).toDF
} else {
frame
}
}
}
The configuration file mooc_sst_application.conf is shown below; replace F:/nebula/nebula-web-docker-master/example/ with the actual path of your files.
{
spark: {
app: {
name: Nebula Exchange 2.0
}
driver: {
cores: 1
maxResultSize: 1G
}
executor: {
memory:1G
}
cores {
max: 16
}
}
nebula: {
address:{
graph:["172.25.21.22:9669"]
meta:["172.25.21.22:9559"]
}
user: user
pswd: password
space: mooc
connection {
timeout: 3000
retry: 3
}
execution {
retry: 3
}
error: {
max: 32
output: errors
}
rate: {
limit: 1024
timeout: 1000
}
}
tags: [
{
name: course
type: {
source: csv
sink: client
}
path: "file:///F:/nebula/nebula-web-docker-master/example/mooc-actions/course.csv"
fields: [_c0, _c1]
nebula.fields: [courseId, courseName]
vertex: _c1
separator: ","
header: false
batch: 256
partition: 32
}
{
name: user
type: {
source: csv
sink: client
}
path: "file:///F:/nebula/nebula-web-docker-master/example/mooc-actions/user.csv"
fields: [_c0]
nebula.fields: [userId]
vertex: _c0
separator: ","
header: false
batch: 256
partition: 32
}
]
edges: [
{
name: action
type: {
source: csv
sink: client
}
path: "file:///F:/nebula/nebula-web-docker-master/example/mooc-actions/actions.csv"
fields: [_c0, _c3, _c4, _c5, _c6, _c7, _c8]
nebula.fields: [actionId, duration, feature0, feature1, feature2, feature3, label]
source: _c1
target: _c2
separator: ","
header: false
batch: 256
partition: 32
}
]
}
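Before running the import, it can be worth sanity-checking one of the source files with plain Spark (a minimal sketch; the path matches the config above and should be adjusted to your environment):

import org.apache.spark.sql.SparkSession

object CsvSanityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("csv-check").master("local[*]").getOrCreate()
    val df = spark.read
      .option("header", "false")
      .option("sep", ",")
      .csv("file:///F:/nebula/nebula-web-docker-master/example/mooc-actions/actions.csv")
    // Columns should show up as _c0, _c1, ... matching the fields mapping in the config
    df.printSchema()
    df.show(5, truncate = false)
    spark.stop()
  }
}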
object ExchangeTest {
private[this] val LOG = Logger.getLogger(this.getClass)
def main(args: Array[String]): Unit = {
testDataImport()
}
def testDataImport(): Unit = {
val conf = "E:/gitcodes/nebula/nebula-spark-utils/nebula-exchange/src/main/resources/mooc_sst_application.conf"
val params: Array[String] = Array("-c", conf)
val PROGRAM_NAME = "Nebula Graph Exchange"
val (c, configs, spark) = ExchangeUtil.initParam(params, PROGRAM_NAME, "local[4]")
ExchangeUtil.importData(c, configs, spark)
spark.close()
sys.exit(0)
}
}
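Running testDataImport from IDEA with the local[4] master produces log output like the following excerpt: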
2021-04-20 10:16:15,792 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:15,799 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:15,805 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:17,930 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:2-2124
2021-04-20 10:16:17,951 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:17,953 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:17,956 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:17,960 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:4-4
2021-04-20 10:16:17,977 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:17,978 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:17,980 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:17,985 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:5-5
2021-04-20 10:16:18,012 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:18,014 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:18,319 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:18,325 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:6-6
2021-04-20 10:16:18,341 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:18,343 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:18,346 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:18,351 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:7-5
2021-04-20 10:16:18,677 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:18,679 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:18,984 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:21,495 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:21,690 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:8-2706
2021-04-20 10:16:21,718 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:21,720 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:21,722 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:21,735 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:9-13
2021-04-20 10:16:21,797 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:21,800 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:21,805 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:0-5
2021-04-20 10:16:24,753 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:24,823 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669
2021-04-20 10:16:24,825 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:24,828 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:24,833 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:11-5
2021-04-20 10:16:25,656 INFO [com.vesoft.nebula.exchange.GraphProvider:48] - switch space mooc
2021-04-20 10:16:25,658 INFO [com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter:147] - Connection to List(172.25.21.22:9559)
2021-04-20 10:16:25,668 INFO [com.vesoft.nebula.exchange.processor.VerticesProcessor:81] - spark partition for vertex cost time:10-10
2021-04-20 10:16:25,681 INFO [com.vesoft.nebula.client.graph.net.NebulaPool:105] - Get connection to 172.25.21.22:9669