Closed YamatoSecurity closed 2 months ago
@YamatoSecurity Yes, I like this :) lol I would love to implement it💪
Depending on the difficulty of implementation, I may create separate pull requests for CSV and JSON!
Timestamp: "%Timestamp%"
RuleTitle: "%RuleTitle%"
Level: "%Level%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
RecordID: "%RecordID%"
Details: "%Details%"
ExtraFieldInfo: "%ExtraFieldInfo%"
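As a sketch of how a profile like the one above could be rendered, the `%Alias%` placeholders only need a string substitution pass. Everything below (the function name and the map shape) is illustrative, not Hayabusa's actual implementation:

```rust
use std::collections::HashMap;

/// Fill a profile line's %Alias% placeholders from a map of resolved values.
/// Illustrative sketch only; Hayabusa's real profile handling lives in
/// detections/message.rs and differs in detail.
fn fill_profile(template: &str, values: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (alias, value) in values {
        // Each alias appears in the profile wrapped in percent signs, e.g. %EventID%.
        out = out.replace(&format!("%{alias}%"), value);
    }
    out
}
```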
#[derive(Clone, Debug)]
pub struct EvtxRecordInfo {
pub evtx_filepath: String, // path of the event file; used when outputting logs
pub record: Value, // one record's worth of data serialized to JSON
pub data_string: String, // the data in one record converted to a string
pub key_2_value: HashMap<String, String>, // map of nested keys joined with "." to their values
pub recovered_record: bool, // whether the record was recovered (carved)
}
#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct DetectInfo {
pub detected_time: DateTime<Utc>,
pub rulepath: CompactString,
pub ruleid: CompactString,
pub ruletitle: CompactString,
pub level: CompactString,
pub computername: CompactString,
pub eventid: CompactString,
pub detail: CompactString,
pub ext_field: Vec<(CompactString, Profile)>,
pub is_condition: bool,
pub details_convert_map: HashMap<CompactString, Vec<CompactString>>,
}
| Provider | EID | Details |
| --- | --- | --- |
| Microsoft-Windows-Security-Auditing | 4624 | Type: %LogonType% ¦ TgtUser: %TargetUserName% ¦ SrcComp: %WorkstationName% ¦ SrcIP: %IpAddress% ¦ LID: %TargetLogonId% |
| MsiInstaller | 1034 | Product: %Data[1]% ¦ Ver: %Data[2]% ¦ Vendor: %Data[5]% ¦ Status: %Data[4]% |
| Service Control Manager | 7031 | Svc: %param1% ¦ CrashCount: %param2% ¦ Action: %param5% |
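An indexed alias like `%Data[1]%` carries both a field name and a position. Here is a minimal sketch of splitting the two apart; the function name and string-based parsing are hypothetical, since Hayabusa's real code uses `ALIASREGEX`/`SUFFIXREGEX` in `detections/message.rs`:

```rust
/// Split an alias such as "%Data[1]%" into its field name and optional
/// 1-based index; a plain alias like "%EventID%" yields (name, None).
/// Illustrative sketch only, not the actual Hayabusa parser.
fn split_alias(alias: &str) -> Option<(String, Option<usize>)> {
    let inner = alias.strip_prefix('%')?.strip_suffix('%')?;
    match inner.split_once('[') {
        Some((name, rest)) => {
            // The suffix between brackets must be a number, e.g. "1" in Data[1].
            let idx: usize = rest.strip_suffix(']')?.parse().ok()?;
            Some((name.to_string(), Some(idx)))
        }
        None => Some((inner.to_string(), None)),
    }
}
```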
detections/detection.rs#create_log_record
fn create_log_record(
rule: &RuleNode,
record_info: &EvtxRecordInfo,
stored_static: &StoredStatic,
) -> DetectInfo {
...
let mut profile_converter: HashMap<&str, Profile> = HashMap::new();
detections/message.rs#create_message
pub fn create_message(
event_record: &Value,
output: CompactString,
mut detect_info: DetectInfo,
profile_converter: &HashMap<&str, Profile>,
(is_agg, is_json_timeline): (bool, bool),
(eventkey_alias, field_data_map_key, field_data_map): (
&EventKeyAliasConfig,
&FieldDataMapKey,
&Option<FieldDataMap>,
),
) -> DetectInfo {
...
detections/message.rs#parse_message
pub fn parse_message(
event_record: &Value,
output: &CompactString,
eventkey_alias: &EventKeyAliasConfig,
json_timeline_flag: bool,
field_data_map_key: &FieldDataMapKey,
field_data_map: &Option<FieldDataMap>,
) -> (CompactString, Vec<CompactString>) {
...
let mut hash_map: Vec<(CompactString, Vec<CompactString>)> = vec![];
let details_key: Vec<&str> = output.split(" ¦ ").collect();
for caps in ALIASREGEX.captures_iter(&return_message) {
...
let suffix_match = SUFFIXREGEX.captures(target_str);
let suffix: i64 = match suffix_match {
...
let mut details_key_and_value: Vec<CompactString> = vec![];
for (k, v) in hash_map.iter() {
// For JSON output, alias replacement is handled by the afterfact output functions, so it is not done here
if !json_timeline_flag {
return_message = CompactString::new(return_message.replace(k.as_str(), v[0].as_str()));
}
for detail_contents in details_key.iter() {
if detail_contents.contains(k.as_str()) {
let key = detail_contents.split_once(": ").unwrap_or_default().0;
details_key_and_value.push(format!("{}: {}", key, v[0]).into());
break;
}
}
}
(return_message, details_key_and_value)
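To illustrate the ` ¦ `-splitting that `parse_message` performs on the `details` template, here is a hedged sketch; the function name and the resolved-alias input shape are assumptions for illustration, not the actual code:

```rust
/// Turn a "Label: %Alias% ¦ Label: %Alias%" details template into labelled
/// key/value pairs, given already-resolved alias values.
/// Illustrative sketch of the " ¦ " splitting done in parse_message.
fn build_details(template: &str, resolved: &[(&str, &str)]) -> Vec<(String, String)> {
    let mut out = Vec::new();
    for part in template.split(" ¦ ") {
        // Each segment looks like "Label: %Alias%".
        if let Some((label, alias)) = part.split_once(": ") {
            if let Some((_, value)) = resolved.iter().find(|(a, _)| *a == alias) {
                out.push((label.to_string(), value.to_string()));
            }
        }
    }
    out
}
```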
detections/message.rs#parse_message
let recinfo =
utils::create_recordinfos(event_record, field_data_map_key, field_data_map);
detections/utils.rs#_collect_recordinfo
if arr_index > 0 {
let (field_data_map, field_data_map_key) = filed_data_converter;
let i = arr_index + 1;
let field = format!("{parent_key}[{i}]",).to_lowercase();
if let Some(map) = field_data_map {
let converted_str = convert_field_data(
map,
field_data_map_key,
field.as_str(),
strval.as_str(),
org_value,
);
if let Some(converted_str) = converted_str {
strval = converted_str.to_string();
}
}
}
output.insert((parent_key.to_string(), strval));
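Building on the `arr_index` handling above, each unnamed `Data` value could get its own indexed key rather than all sharing `data`. A minimal sketch under that assumption (the function name and return type are illustrative, not Hayabusa's actual `_collect_recordinfo` signature):

```rust
use std::collections::BTreeSet;

/// Give every unnamed Data value its own indexed key (data[1], data[2], ...)
/// instead of the shared key "data". Illustrative sketch only.
fn index_data_fields(parent_key: &str, values: &[&str]) -> BTreeSet<(String, String)> {
    let mut output = BTreeSet::new();
    for (arr_index, strval) in values.iter().enumerate() {
        // 1-based, so the keys line up with %Data[1]%-style aliases.
        let i = arr_index + 1;
        let field = format!("{parent_key}[{i}]").to_lowercase();
        output.insert((field, strval.to_string()));
    }
    output
}
```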
@fukusuket Sorry, I think this is going to be a difficult issue, but I think you will like it. 😉 Please let me know if you are interested in implementing it.
Right now, all of the unnamed `Data` fields get outputted under the same `Data` key, so first, it is hard to tell which position they are in without looking up the original XML data in Event Viewer, etc. Second, using arrays will cause problems when importing into Elastic and maybe other SIEMs. Third, the position will change depending on whether fields are defined in `Details` or not, making extraction difficult as the index number may change suddenly.

To fix these problems, I want to output the `Data` fields with their index numbers, the same way they are referenced when extracting data.

For example:
`./target/release/hayabusa csv-timeline -d ../hayabusa-sample-evtx -w -r rules/hayabusa/builtin/System/Sys_6009_Info_ComputerStartup.yml`
This rule uses
`details: 'MajorVer: %Data[1]% ¦ BuildNum: %Data[2]%'`
so it will only get data from the first and second `Data` fields. However, there is more data.

Right now, the output looks like this:
But I want to change it to:
For JSON timelines (example: `./target/release/hayabusa json-timeline -d ../hayabusa-sample-evtx -w -r rules/hayabusa/builtin/System/Sys_6009_Info_ComputerStartup.yml`):

Before:
After:
What do you think?