Open enternoescape opened 7 years ago
Note that any changes I'm making are only activated when a specific property is set, so for now I'm not doing anything that people would need to live with if it goes wrong.
I was thinking that when you're splitting shows like this, you would want Scheduler to order them in order of where the start time padding actually places the start of a recording. Will changing this sort order cause any non-obvious problems?
Do you intend on adding packet filtering to this splitter at some point, to deal with the 'multiple channels from same stream' desire some have? So each virtual file would include the program number as well, and the splitter only put out the appropriate packets to that file (potentially altering/regenerating PMT/PAT's as appropriate)?
I first just want to get things working with this "simple" configuration. I just got a good grasp on how Scheduler works yesterday afternoon (no changes needed since it doesn't sort the schedule in any obvious way). I'm still getting familiar with how Seeker works. I think you're asking about having the media server do packet filtering. I don't know that we want to pull all of that kind of activity into the JVM. One of the nice things about the current implementation is that most of the data going through the media server stays in native memory and never touches the heap, which saves performance and keeps object creation down substantially. Putting a filter in place that may be reading about 4 out of every 188 bytes would lose this advantage for us. In a 64-bit JVM this isn't a big deal; in a 32-bit JVM it matters a little more. In both cases it's slower.
You could then argue that we should be using the remuxer for everything and just tell it what program we need, but that's not a great solution either; as you have seen, for some users it still has problems that need to be fixed. Plus one of those problems is the situation where we don't have a decrypted PAT and/or PMT.
It likely wouldn't be a problem to have an option for the media server to filter specific PIDs as a convenience (not by program; and if you have all the right PIDs, regenerating the PAT isn't needed), but I think the encoder should be expected to do most of the program splitting since it's usually in a better position to determine what is what, and it should also result in less traffic to the media server. I would expect that simple network encoders are not going to even attempt this functionality.
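For context on why only ~4 of every 188 bytes matter: an MPEG-TS packet is 188 bytes and the PID lives in the first bytes of the header. A rough sketch of what per-packet PID filtering would look like in the JVM (class and method names here are hypothetical, not SageTV code):

```java
import java.nio.ByteBuffer;
import java.util.Set;

// Hypothetical sketch: filter 188-byte MPEG-TS packets by PID.
public class TsPidFilter {
    static final int TS_PACKET_SIZE = 188;
    static final byte SYNC_BYTE = 0x47;

    private final Set<Integer> allowedPids;

    public TsPidFilter(Set<Integer> allowedPids) {
        this.allowedPids = allowedPids;
    }

    // Returns true if the packet starting at 'offset' should be written out.
    public boolean accept(ByteBuffer buf, int offset) {
        if (buf.get(offset) != SYNC_BYTE) return false; // lost sync
        // The PID is 13 bits spanning header bytes 1 and 2.
        int pid = ((buf.get(offset + 1) & 0x1F) << 8) | (buf.get(offset + 2) & 0xFF);
        return allowedPids.contains(pid);
    }
}
```

Even though only a few header bytes are inspected, every packet still has to be walked by JVM code, which is the cost being weighed here.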
I think I am partly missing one of your points. I actually wanted to modify START, STOP and SWITCH to be able to indicate what's going on with the starting of programs and stopping of programs. It will pretty much use the same channel "numbers," but the network encoder will know that it is to start streaming the program for that channel and not stop any other programs in progress. We would also have a more inclusive STOP that means everything stops to catch any encoders that are misbehaving or didn't process a request for some reason.
I'm not sure if I want to try to modify the built in native capture device support for this. I am strongly considering integrating a large chunk of the best parts of OpenDCT into SageTV if I can get all of these parts working because it will make it possible to do these things at least with the HDHomeRun and InfiniTV devices. My only real hangup is that the remuxer still needs work to deal with bad cable companies.
So after working with Seeker a little to get it to deal with multiple files associated with an encoder at once, I am going to put Seeker behind an interface implemented by another class I'll call Hunter (I think it's close enough that we aren't going to get confused about why it's there). While it's not impossible to get this all packed in and switch between the current Seeker behavior and the new Seeker behavior, it's starting to look too messy for my liking, and messy often leads to hard-to-track-down mistakes. It will be much easier to only need to follow one code path versus one that occasionally does something different. This is probably better long term too, since I'm going to work on cleaning up all synchronization with read/write locks where it makes sense in my version of Seeker, and I will be pulling classes out of it too so we can be sure of how we are accessing things and can ensure that it's always safe.
You may want to first refactor Seeker into more classes that split up the behavior better. There are LOTS of things going on in Seeker that could definitely be done elsewhere (such as library importing, drive mappings, etc.). Your call though...you've obviously looked into this very deeply already as to what'd be an efficient way to code it all up. :)
One thing I wasn't entirely clear on from your description is why you're using these virtual filenames that don't match what the normal generated filenames are. If you're just splitting them in the MediaServer into the files that actually need to be written by way of that virtual file...then that makes sense. But what doesn't make sense is if the encoder isn't going to be using the MediaServer, then it'll be writing to some other generated file path and not the proper one based on SageTV's naming convention (and it's somewhat important to keep the naming convention, it's relied on heavily for recovering recordings when things go wrong).
-- Jeffrey Kardatzke jkardatzke@google.com Google, Inc.
I am definitely breaking things up a bit. I was surprised at just how much that class is doing that didn't seem like things that class should be doing. I'm going to try to break up the tasks and I'm also adding some classes to help make sure we don't end up with things working by way of the out-of-thin-air principle.
If an encoder writes to the virtual file, that virtual file only exists until the next SWITCH and then it is removed. Its purpose is to be copied, as it's written, into all of the other associated actual recordings. I didn't mention that it would be deleted when done. You did make me think of the possibility that the file doesn't get cleaned up and SageTV tries to import it as a recording. To handle that, I would add a folder inside the recording folders that's only created if an encoder actually uses the virtual file, and any file in that folder whose name converts from hex to a valid integer is always ignored. I think that would resolve the possibility without needing to rely on the information being saved somewhere. Like I said earlier, it's really just a shim and may not get used much, but man is it taking significant effort to deal with it. :)
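The hex-name check described above is simple to sketch (a hypothetical helper, not actual SageTV code): a file in the holding folder is skipped by the importer only if its base name parses as a hexadecimal integer.

```java
// Hypothetical sketch: virtual files are named as the hex form of an
// integer id, so the library importer ignores any file in the holding
// folder whose base name parses as hex.
public class VirtualFileNames {
    public static boolean isVirtualFileName(String baseName) {
        if (baseName == null || baseName.isEmpty()) return false;
        try {
            Integer.parseInt(baseName, 16); // valid 32-bit hex integer?
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

A recording following SageTV's normal naming convention would contain non-hex characters and never match, which is what keeps stale virtual files from being imported.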
I would be a little concerned with the performance penalty when an encoder doesn't use the MediaServer and is then writing to this virtual file which you then have to copy to the actual file location...so it's doubling up on file writes (and adding another read) when it shouldn't have to.
You should also cover cases where SageTV unexpectedly exits...not that there are known causes of crashing...but power outages do occur, so you'll want to clean up anything in this virtual file folder if that happens.
One other thing to watch for is all over the codebase it's relying on the fact that a single encoder is only writing to a single file at any time.
The Scheduler changes to doing multiple overlapping files at once isn't going to be that ugly...the real nasty part comes with modifying Seeker. :) I am very impressed you are trying to take on a task like this though.
I was very concerned about that too, and that's why I put so many details about the performance impact in my first post. I don't like the compromise all that much either, but something had to be done to account for what a network encoder that supports switching but chooses not to use the media server might do. I couldn't have it writing to an actual recording, because it needs to connect with a different filename each time and I don't want to end up splitting a recording up in strange ways, and otherwise we have no indication that a transition has happened or where the cutoff is. If the encoder doesn't support switching, then it will be handled pretty much the "normal" way: it will be given a normal filename and it will lose the padding, unless we can come up with some creative way to keep the padding for some capture devices and get rid of it for ones that can overlap.
I know that multiple recordings at the same time on one tuner will cause some chaos. I'm still learning all of the interesting situations it can put us in and working out how to resolve each one of them. I'm also aware that live playback could be interesting when the ending padding doesn't line up with any starting padding, so that's something else I've been thinking about.
I don't quite understand why the splitter doesn't just write to each file it needs to directly, instead of to a temporary file and then copied.
You're misunderstanding how this works.
If the media server isn't being used by the network encoder, we don't have any other options. I am not writing this just to support OpenDCT. There are a handful of other network encoders out there still in use and I do not want to alienate those people from still getting a good experience. I am considering adding logic to attempt to allow transitioning even for network encoders that would otherwise not know how to do it, but we are not going to be able to get around a file being written and needing to read that file into other files in those cases.
IF the media server is being used, things are very efficient and no temporary file is ever written.
IF the media server is NOT being used, things are less efficient and a temporary file is created.
The temporary file will be the least used part of this entire implementation, and unfortunately it will also be a necessary evil for many network encoders that will likely never receive support for the media server. The problem with, for example, just giving the encoder the next recording filename is that if we have overlapping padding, we don't know what the encoder might do. It might append to the file (best case), or it might overwrite it from the beginning (worst case). You could ask whether we can just copy the padding later, but I think that would be even harder on the disk than splitting in real time, and it has the disadvantage of users wondering why sometimes they don't see the padding.
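The fallback path described here amounts to tailing the temporary virtual file and appending each newly written chunk to every overlapping real recording. A minimal sketch, with hypothetical names, that shows why it doubles the disk writes (and adds a read):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.List;

// Hypothetical sketch of the "necessary evil" path: as the encoder appends
// to the temporary virtual file, a background task tails it and appends the
// new bytes to every overlapping real recording.
public class VirtualFileTee {
    private long readOffset = 0;

    // Copy any newly written bytes from 'virtual' to all destination recordings.
    public void pump(FileChannel virtual, List<FileChannel> recordings) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
        while (readOffset < virtual.size()) {
            buf.clear();
            int n = virtual.read(buf, readOffset); // one extra read...
            if (n <= 0) break;
            buf.flip();
            for (FileChannel rec : recordings) {
                buf.rewind();
                rec.write(buf, rec.size()); // ...and one extra write per recording
            }
            readOffset += n;
        }
    }
}
```

Every byte the encoder writes is read back once and written again at least once more, which is the performance concern raised earlier; the media server path avoids this entirely.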
To solve the file overwriting problem, just use the multiple segment feature of MediaFile. Then file writes would always start from the beginning. This does require sending more commands to the encoder to do the switching, but avoids having to use an extra temporary file.
There is one other thing I'd have to fix, which isn't hard to do: allowing seamless playback switching between consecutive segments of a MediaFile in this case. Currently it will never seamlessly switch between segments of a MediaFile, because previously segments were always discontinuous.
Also, are you planning on making this work for non network encoders too? They basically behave the same as network encoders that don't use the media server.
Jeff Kardatzke Sent from my Android
Maybe multi-segment would help for network encoders that can't transition. I didn't forget about that option, I just don't like forcing multi-segment recordings when there wasn't anything wrong with the recording. In my mind the only reason you get those is if the recording had to be restarted because the encoder stalled. I very strongly want this feature to be normal in all appearances to the user and not generate strange questions. Also in this case, I am talking about network encoders that would require a STOP then START to transition, so you're still going to have a break in the stream on any of those. If I didn't need to consider that people are still using those for unique discontinued devices such as the R5000-HD, it would be easier for me to just say we're not supporting them; you need to upgrade. Maybe I'll come up with something a little more clever here, but for these STOP/START situations it's always going to be a little rocky. I am taking in all of the suggestions and I appreciate the feedback.
Seamless switching between segments will likely not be needed because the normal case where that might happen will not in reality be seamless anyway. Best case there will still be a 2-10 second gap similar to how it used to be prior to the ability to seamlessly transition. I'm still hammering out the details on this one. There might be a situation where the network encoder supports SWITCH, but decides to not use the media server. Then this kind of transitioning would be warranted.
I didn't say it directly, but when I say encoder, I am talking about all encoders (not just network encoders). The network encoders are the only ones that might not be able to use the media server, so they are the only ones that could be a pain to support. I plan on passing the stv:// string to the native encoders so that they will route through the media server and use the splitter.
As a result of pulling things apart I have ended up creating another singleton I'm calling Library. The saved preferences still fall under seeker/ for compatibility purposes. This handles all of the library scanning functions and general file management that was being done in Seeker. I will be adding an intermediary interface so that you can seamlessly switch between the old seeker and the new seeker + library configuration without tons of branching (Hunter). I'm also trying to make the steps used to start/stop/transition a recording more modular so we can reduce some code duplication.
I see code in seeker that likely is unnecessary regarding FAT32 filesystem limitations and it's definitely not transition friendly. Do we still want to support filesystems that are sub-optimal for recordings? Since I am adding support for multiple files from one encoder this could end up being a little clunky because the easiest thing to do is when any of the files hit 3.5GB, we stop the encoder, then do rollSegment on all of the files, then start the encoder back up.
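The "clunky" multi-file FAT32 handling described above reduces to a simple check, sketched here with hypothetical names (the real code would also have to stop the encoder, call rollSegment on every file together, and restart):

```java
// Hypothetical sketch: with several files open on one encoder, the simplest
// FAT32-safe behavior is to roll ALL segments as soon as any one file nears
// the 4GB filesystem limit (3.5GB leaves headroom for in-flight writes).
public class SegmentRoller {
    static final long ROLL_THRESHOLD = 3_500_000_000L; // ~3.5 GB

    // Returns true if any active file has grown past the threshold, meaning
    // the encoder should be stopped and every segment rolled together.
    public static boolean needsRoll(long[] fileSizes) {
        for (long size : fileSizes) {
            if (size >= ROLL_THRESHOLD) return true;
        }
        return false;
    }
}
```

Rolling every file at once keeps the segments aligned, at the cost of forcing an extra segment break into recordings that hadn't hit the limit themselves, which is what makes it clunky.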
You can definitely drop the FAT32 thing as well as other unused legacy stuff, subject to my review of course ☺️
I'll be asking before removing things wholesale. I don't want to waste a lot of time on things we don't really need though. I assume it's also ok to remove anything that only happens when EMBEDDED is set?
It's really funny when I think about what's happening because I'm dealing with the possibility of multiple files being written at the same time, but the way it's being handled can really just scale as high as your hardware will allow. In reality we'll have 3 recordings coming from one encoder on a bad day and that might only be for a few minutes.
I also have worked out some logic for which recording you'll see in instances where we could be recording multiple things, but the UI or some other "legacy" interface wants to know which one the requested encoder is doing. It first tries to get the one associated with the airing in the current time slot (as if it didn't have padding); if we are outside of that because we are only recording padding at the moment, it picks the one that will be ending soonest. After everything gets the green light, I will be adding new API methods that deal with the multiple-recordings situation so that we can update the UI, and if we get a new web interface, it can use the new APIs too.
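That selection rule can be sketched as follows (hypothetical names, not SageTV code; times are epoch milliseconds): prefer the recording whose unpadded airing slot contains the current time, otherwise fall back to the active recording ending soonest.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of picking the "current" recording when one encoder
// has several files open at once.
public class CurrentRecordingPicker {
    public static class Recording {
        public final long airingStart, airingEnd, recordingEnd;
        public Recording(long airingStart, long airingEnd, long recordingEnd) {
            this.airingStart = airingStart;
            this.airingEnd = airingEnd;
            this.recordingEnd = recordingEnd;
        }
    }

    public static Optional<Recording> pick(List<Recording> active, long now) {
        // First choice: the airing whose (unpadded) time slot contains 'now'.
        Optional<Recording> inSlot = active.stream()
            .filter(r -> r.airingStart <= now && now < r.airingEnd)
            .findFirst();
        if (inSlot.isPresent()) return inSlot;
        // Otherwise we're recording pure padding: show the one ending soonest.
        return active.stream().min(Comparator.comparingLong(r -> r.recordingEnd));
    }
}
```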
Sounds good, and yeah anything with EMBEDDED can be removed.
Sort of unrelated, but definitely of interest to me from a performance perspective. I would like to bump us up to Java 1.8 so we can use StampedLock optimistic reading on the database and other objects. In many cases it's a lot faster than the reentrant locks we are using right now. I guess you'd need to take a vote on that in the forums, but it's definitely the right direction if we are looking to make things even more responsive.
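For reference, the StampedLock optimistic-read pattern looks like this (a generic Java 8 sketch, not SageTV code); the win is that uncontended readers never acquire a lock at all:

```java
import java.util.concurrent.locks.StampedLock;

// Generic StampedLock optimistic-read sketch: readers validate after
// reading and only fall back to a real read lock if a writer intervened.
public class AiringIndex {
    private final StampedLock lock = new StampedLock();
    private long start;
    private long end;

    public void set(long s, long e) {
        long stamp = lock.writeLock();
        try { start = s; end = e; } finally { lock.unlockWrite(stamp); }
    }

    public long duration() {
        long stamp = lock.tryOptimisticRead(); // no lock taken yet
        long s = start, e = end;               // speculative reads
        if (!lock.validate(stamp)) {           // a writer got in between
            stamp = lock.readLock();           // fall back to a real lock
            try { s = start; e = end; } finally { lock.unlockRead(stamp); }
        }
        return e - s;
    }
}
```

Note that StampedLock is not reentrant, which is one reason swapping it in for existing ReentrantReadWriteLock usage needs care.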
I don't think there's really much of a performance issue relating to the locking currently. It's been a while since I've even thought about trying to optimize performance on the PC; the main focus had been on lower-powered embedded devices for a while, because all of the performance tests we run have such great results on the PC that it didn't need any enhancement (although all the embedded enhancements benefited the PC version too). If we did want to look into performance, I'd first want to figure out what was actually slow rather than just trying things that we think may make it better. :) There are definitely other areas that would benefit more from optimization than changing the locking techniques on the database (such as Carny's analysis, although it runs in a low-priority background thread, inefficiencies in the Scheduler, Wizard DB maintenance, etc.). We even wrote a compiler for the STV at Google, but I removed it from the open source version because it made the UI code much more nasty and hard to comprehend, and it wasn't really all that compatible with the plugin system either (and UI response time on the PC is generally under one frame duration, which isn't true on embedded).
Not that I think it's incredibly important, but regarding the API return on what is recording on the encoder, I'm wondering if it should instead return what airing is starting next. I'm thinking the primary location people will come in contact with the API call is the recording schedule view (primarily, the parallel view). People are more often in there to see what is recording next than what is currently recording, and as such, it should show what is coming up instead of what is already finished but is still collecting padding.
That said, I wonder if there should also be a MediaFile[] GetCaptureDeviceCurrentRecordFiles API call that returns a list of the currently recording files, for future UI use.
You are correct that it generally doesn't really matter as long as we are showing something that's actually recording. I assume you mean show whatever airing has started most recently; you don't want it returning an airing that isn't even recording yet. The reason I want to keep it mostly aligned with the current time slot when possible is that there are views that would otherwise look very odd (such as the parallel schedule). Also, I believe the UI normally gets a list of media files and then just sees a recording as in progress or complete, so why it's in that status becomes irrelevant. I'm not sure how often we query a specific encoder through the API for the file it claims to be using other than in the web interface. Often we are indirectly doing it when we start playback on something that is currently recording, but I can manage that situation without the UI needing to know what's really going on.
@CraziFuzzy To your most recent comment, I mentioned already that I will be adding things like that to the API once this becomes a normal part of SageTV. Until we get to that point I don't want to provide API methods that might need to be changed after a forward thinking person is already using them.
@Narflex I was wondering about multiConflictCapable. Is the reason we need to query the client because older STV's don't support it?
Yeah, that was a new feature we added for Google Fiber.
This is the first thing I've run into that actually would require some UI updates to handle multiple recordings on one encoder in a manner that would not be confusing. I'm going to do the single request for now and we can update it later. This is complex enough as it is.
I've walked through everything in Seeker twice now and I think I have a fairly good grasp on the flow of things and why some things are the way they are. I pretty much had to rewrite how an encoder is selected/taken over in what I'm calling Seeker2, because it really became too convoluted once you add multiple recordings per encoder and the possibility that we could be recording different programs on the same frequency. I believe I have also fixed the majority of the hard-to-follow race conditions involved in that process without creating significant synchronization barriers, while greatly improving the readability of the code. It should be a lot easier for anyone in the future who wants to tweak how the encoders are prioritized for that process.
The multiple programs on the same frequency thing is implemented provisionally right now since making that really work would require changes to scheduler and I'm not ready to work on that at the moment, but I do have a few ideas on where we can improve that process. I will likely branch off a beta scheduler too because I realize I'm pretty much working on the heart of SageTV and the last thing I want to do is cause grief for someone because of code changes to add a feature they don't even care about.
I did get everything to compile yesterday with the new splitter running and found a few issues that highlighted that some things needed to be changed more than I originally expected. Fortunately all of these changes are contained in Seeker2, so I shouldn't be breaking other things. Overall, I'd say I've been making a lot of progress.
Sounds great, I'm anxious to check out this code when it's ready. :)
Yeah, I feel bad for you. It's a good size commit at the moment. Fortunately not much of it actually touches original code you would likely be concerned about. :)
Every time I see this, it makes me laugh.
// NOTE: Because we raced, there's a chance that one of the encoders is actually 'better for
// us' meaning the same program is now recording somewhere else. Use that one so we don't
// record twice (if you've canceled something because you thought it was full, I feel bad
// for you son, I've got 99 problems and your canceled recording ain't one)
I didn't actually write that particular comment. :)
@Narflex I managed to make something strange happen a few times. I know I've seen this kind of thing with the normal Seeker, but it was because the file wasn't growing in size or there were some issues with formatting of the file. In this case, the file is definitely correct and present. It also plays back without issue outside of SageTV, but apparently the formatting isn't being detected. What's the typical flow of this section of code in VideoFrame? What should I be looking for? I was having a hard time following it because it seems to move around a lot.
Sun 2/19 13:05:49.261 [AsyncWatch@447fc9] VideoFrame.watch(A[46102200,45853644,"NHL Hockey",11458@0219.12:30,150,T])
Sun 2/19 13:05:50.184 [VideoFrame-SAGETV_PROCESS_LOCAL_UI@92fb9d] VF processing job VFJob[WatchMF r=0.0 t=0 file=MediaFile[id=46574852 A[46102200,45853644,"NHL Hockey",11458@0219.12:30,150,T] mask=TV host=sage01 encodedBy=null format=MPEG2-PS 0:00:00 0 kbps [] \\nas01\Recordings\sage01\NHLHockey-WashingtonCapitalsatNewYorkRangers-46102200-0.mpg, Seg0[Sun 2/19 13:05:49.396-Wed 12/31 19:00:00.000]] ifn=null] nPlayin=false
Sun 2/19 13:05:50.215 [VideoFrame-SAGETV_PROCESS_LOCAL_UI@92fb9d] VF processing job VFJob[LoadMF r=0.0 t=0 file=MediaFile[id=46574852 A[46102200,45853644,"NHL Hockey",11458@0219.12:30,150,T] mask=TV host=sage01 encodedBy=null format=MPEG2-PS 0:00:00 0 kbps [] \\nas01\Recordings\sage01\NHLHockey-WashingtonCapitalsatNewYorkRangers-46102200-0.mpg, Seg0[Sun 2/19 13:05:49.396-Wed 12/31 19:00:00.000]] ifn=null] nPlayin=false
Sun 2/19 13:05:50.244 [VideoFrame-SAGETV_PROCESS_LOCAL_UI@92fb9d] VF waiting for data to appear in new file...liveWait=-838
Sun 2/19 13:05:50.286 [VideoFrame-SAGETV_PROCESS_LOCAL_UI@92fb9d] VF thread is now waiting for 0:00:00.200
....
Sun 2/19 13:05:51.083 [VideoFrame-SAGETV_PROCESS_LOCAL_UI@92fb9d] VF processing job VFJob[LoadMF r=0.0 t=0 file=MediaFile[id=46574852 A[46102200,45853644,"NHL Hockey",11458@0219.12:30,150,T] mask=TV host=sage01 encodedBy=null format=MPEG2-PS 0:00:00 0 kbps [] \\nas01\Recordings\sage01\NHLHockey-WashingtonCapitalsatNewYorkRangers-46102200-0.mpg, Seg0[Sun 2/19 13:05:49.396-Wed 12/31 19:00:00.000]] ifn=null] nPlayin=false
Sun 2/19 13:05:51.091 [VideoFrame-SAGETV_PROCESS_LOCAL_UI@92fb9d] VF waiting for data to appear in new file...liveWait=-1690
I didn't see any log lines there about it even trying to do format detection, which would make me think that the capture device is returning zero for the recorded bytes call.
Thanks for the hint.
I keep seeing how we specially treat these live streams. I know what a buffered live stream is, but are there any examples of how we interact with a live stream? I presume it's a stream that's not being written anywhere since all we do is tune in a channel and I guess some endpoint is waiting for the stream?
That feature was never used in our own software, IIRC. It was for a project we did for a customer who wanted live playback without the disk access that most video capture apps use... maybe also for another customer who was doing a diskless application.
I'd be fine with a surgical removal of all the non buffered live playback code if you wanted to do that in its own commit.
Ok. I'll just not provision for it in the new Seeker. I was carefully working around it and making sure I didn't change its functionality, but now I wish I had asked sooner.
To be clear, I'm not going to remove it from anything that already exists. I might do that later after everything is more settled.
There are a lot of little 'removal' projects that could go on. Removing any semblance of the 'embedded' flag, for instance, would clean up a lot of portions of the code.
I've been doing that on all of the new things I've been writing.
I also wrote clean, standardized replacements for how SageTV reads and writes files. Other than for historical reference, I don't think we need the old classes anymore, so technically they could be removed. On the other hand, I'm not sure there's any significant cost to keeping a class that is never used, other than maybe longer compile times.
I like leaving stuff around....There's been far too many times I come back to using something again later to throw stuff out....And hoarding runs in my family. 😀
This is GitHub...if we removed them, they wouldn't completely go away. You can always find them prior to the commit that removed them. :) I think versioning systems help me get over the anxiety of removing things.
@Narflex I figured out what was going on when playback would fail. I had a situation where SWITCH was being called a few seconds after START, and it was a transition to the same show. The file incremented, but the player didn't know what to do, so it just froze. I added lots of logging to figure out where the problem was and concluded that the code was too complex: it was doing roughly the same thing on up to 4 different sets of recording data, and predictably the end result was harder to follow than it should be.
I fixed this by moving the labyrinth of less expensive conditions that are applied to the current recordings, the next recordings, and the default recordings into a single method that checks all possibilities and then returns, for each individual recording in the set, a value indicating the determined action. The only thing that really separates them after that is what is done based on that feedback. The force watches also go through this process.
I know one result doesn't mean the same thing for each of them, which is why I boiled it down to 9 different outcomes that, so far, are working very well at sorting out all the permutations of issues you can run into while trying to assign airings correctly. Having a fixed number of return values, even when some are only possible under specific circumstances, might not be as efficient as targeting only what we think we will see or have seen, but I think it keeps things better organized and makes it easier to follow why we got what we got, because the way we get there is the same in every case.
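For illustration, a single-decision-point scheme like the one described might look like this sketch. The enum names, the number of cases, and the conditions here are all hypothetical, since the actual nine Seeker2 outcomes aren't listed in this thread:

```java
// Hypothetical sketch of a single evaluation method that every recording
// entry (current, next, default, or forced) is run through, returning one
// outcome value per entry. Names and logic are illustrative only.
public class RecordingDecision {
    public enum Outcome {
        KEEP,            // recording continues unchanged
        SWITCH_FILE,     // same encoder, transition to a new file
        START,           // begin a new recording
        FORCE_WATCH,     // client-driven live request wins
        DEFER            // wait; another encoder may be better
    }

    // One method decides for every entry, so the reasoning path to any
    // given answer is identical regardless of which set the entry came from.
    public static Outcome evaluate(boolean recordingNow, boolean sameAiring,
                                   boolean encoderAvailable, boolean forced) {
        if (forced) return Outcome.FORCE_WATCH;
        if (recordingNow && sameAiring) return Outcome.KEEP;
        if (recordingNow) return Outcome.SWITCH_FILE;
        if (encoderAvailable) return Outcome.START;
        return Outcome.DEFER;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(true, true, true, false));  // KEEP
        System.out.println(evaluate(true, false, true, false)); // SWITCH_FILE
    }
}
```

The attraction of this shape, as described above, is that the caller only varies in what it does with the returned value, not in how the value was reached.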
I have never used intelligent recordings myself, so I was surprised to see what looks like it could just endlessly record the last channel until it has a must-see recording or another intelligent recording scheduled. Is that pretty much accurate?
That last statement is not correct...the defaultRecord will be what's on the same channel and it wouldn't actually do that recording unless the encoder was under live control or it had to perform an intelligent/favorite/manual recording. There's a block where it'll set the defaultRecord to null in certain situations and that's what'll block it from continuing to record the same channel (there's a comment which explains why it does that because apparently it used to keep recording the same channel in that case, but complaints caused me to change that behavior).
I saw that comment. I must have misunderstood what you were actually referring to when I read it.
I believe that I am still completely honoring what you are trying to do, since if we need a default recording, it will check the force watches (yes, plural, because I am planning on being able to use multiple programs on the same frequency) to find what's on right now. If more than one client is using the same encoder, you're locked into that frequency unless there's another encoder available. We don't have a dialog to display on the other clients asking to change the channel so another client can watch something else, do we? I guess that's called yelling. :) Now that I think about it, I remember the behavior that comment was talking about.
The default gets dropped if the clients go away and it has no other reason to exist (manual recording or scheduled favorite).
There is no current mechanism for asking if one client's request can override what other clients are doing WRT live TV. The only exception is when the Seeker needs to change the channel to do a scheduled recording.
I'd love to see the list of 9 different conditions, and how the resulting actions are prioritized.
I have written a splitter that relies on the media server and can also, for legacy purposes, use a file as a source. My reasoning for tasking the media server with this activity is that SageTV is cross-platform and this is the only place where everything can easily meet up. I also like the tidiness of the recording stream going into memory and then being split off into multiple files, as opposed to having a primary recording and chopping random bits off to append to other recordings, or using multiple files for each recording, which, while definitely the most efficient use of space, would be very frustrating both for Comskip consistency and for people using external transcoding for archival purposes.
The way this works is very simple on the surface. Essentially, SageTV gives the encoder a unique filename (an integer in hex + .extension) in the path of a real recording directory. This is referred to as a virtual file. It places the file with its upload ID into a map that's checked very much like the current upload ID process, but it can tell which file it is looking at with absolute certainty even though the connection commands are exactly the same. If it is not known whether the encoder will use the media server, a monitoring thread is also started for the file that might be created. If the encoder connects to the media server, the monitoring thread, if running, is stopped immediately. If the file is created and becomes larger than 0 bytes, the monitoring thread reads the file as if the data were coming in through the media server and splits it the same way.
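A minimal sketch of that registry idea follows. The class and method names are made up for illustration, not SageTV's actual API; the point is that the hex base name identifies the split set regardless of the extension used to connect:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the virtual-file registry described above: the
// "file name" handed to the encoder is just an integer rendered in hex plus
// an extension, so any media server connection for that base name can be
// matched to its upload ID with certainty.
public class VirtualFileRegistry {
    private final Map<String, Integer> uploadIds = new ConcurrentHashMap<>();

    // Register a virtual file; returns the name the encoder should write to.
    public String register(int virtualId, int uploadId, String extension) {
        String baseName = Integer.toHexString(virtualId);
        uploadIds.put(baseName, uploadId);
        return baseName + "." + extension;
    }

    // Any extension with the same hex base name maps to the same split set,
    // which is how a side-channel stream (e.g. subtitles) could be attached.
    public Integer lookup(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String baseName = dot >= 0 ? fileName.substring(0, dot) : fileName;
        return uploadIds.get(baseName);
    }
}
```

Under these assumptions, `register(255, 42, "ts")` yields `"ff.ts"`, and a later connection for `"ff.srt"` resolves to the same upload ID.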
When the monitoring thread is being used for input, it first reads and splits everything that's in the file at that moment, then it starts trying to calculate the rate at which the file is being written so it's not constantly banging against the filesystem. It tries to stay about 64K off from the very end of the file, but it can read and split all the way to the end if the rate fluctuates a lot. The calculation is also regularly adjusted so we don't end up falling hopelessly behind. Over a gigabit connection using SMB, I was able to split a 20Mbps stream into 24 files. This is of course well beyond any reasonable usage, but it's good to know it can be done. When I tested locally, I was only limited by how fast my storage is. That same 20Mbps stream could be split into 200 files before I started to have problems keeping up.
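The trailing-read policy could be sketched like this. The names are hypothetical, and the real rate calculation adapts in ways not shown here; this just illustrates the "read everything first, then stay ~64K off the end" idea:

```java
import java.io.RandomAccessFile;

// Hypothetical sketch of the monitoring thread's read policy: read all the
// data available up front, then trail the growing file by roughly 64 KB so
// we neither hammer the filesystem nor fall hopelessly behind.
public class TailSplitter {
    static final long TRAIL_BYTES = 64 * 1024;

    // How much we can safely read now, staying ~64K off the end of the file.
    public static long bytesSafeToRead(long fileLength, long alreadyRead) {
        long target = fileLength - TRAIL_BYTES;
        return Math.max(0, target - alreadyRead);
    }

    // Runs until the virtual file is closed (termination omitted in sketch).
    static void monitor(RandomAccessFile in, long pollMillis) throws Exception {
        long position = 0;
        byte[] buf = new byte[128 * 1024];
        while (true) {
            long want = bytesSafeToRead(in.length(), position);
            while (want > 0) {
                in.seek(position);
                int n = in.read(buf, 0, (int) Math.min(buf.length, want));
                if (n <= 0) break;
                position += n;
                want -= n;
                // the splitter would fan buf[0..n) out to each target here
            }
            Thread.sleep(pollMillis); // interval would adapt to measured write rate
        }
    }
}
```

For example, with a 200K file and nothing consumed yet, `bytesSafeToRead` allows 136K to be read, leaving the 64K trailing margin intact.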
The monitoring thread is only there as a shim for current network encoders that do not use the media server for various reasons. Everything built into SageTV, and network encoders that use the media server, will use this splitter efficiently when this beta feature is enabled. The only network encoders that will be excluded from this feature are ones that don't support SWITCH. I haven't decided whether we should try to handle transitioning on behalf of a network encoder that lacks the ability. It could be fairly easy when the format is known, and we already have decent code for it in the media server remuxer.
When the media server is used for input, the file opening is validated just like a normal upload request, and it creates a normal FileChannel that only appends to the files it has been told to split into. I initially added the ability to set positions relative to the first virtual file, but I scrapped it because it was adding additional calculations to every single write, and in every case I can think of, encoders only append data. This is compatible with the media server remuxer too and requires no modifications for it to split the file correctly or transition.
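An append-only fan-out along those lines might look like this sketch (illustrative, not the actual media server code). Because every channel is opened in APPEND mode, there is no per-write position bookkeeping, which matches the reasoning above for dropping relative positioning:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of append-only fan-out: every buffer written to the
// virtual file is appended to each real recording file it splits into.
public class FanOutWriter {
    private final List<FileChannel> targets = new ArrayList<>();

    public void addTarget(Path file) throws IOException {
        targets.add(FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND));
    }

    public void write(ByteBuffer data) throws IOException {
        for (FileChannel ch : targets) {
            ByteBuffer dup = data.duplicate(); // each target consumes its own view
            while (dup.hasRemaining()) {
                ch.write(dup);
            }
        }
    }

    public void close() throws IOException {
        for (FileChannel ch : targets) ch.close();
    }
}
```

Duplicating the buffer per target means the incoming data never needs to be copied onto the heap, only re-viewed, which fits the native-memory concern raised earlier in the thread.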
Any time we need to add files or stop writing to files, the encoder is signaled to SWITCH and given a new virtual file. As soon as it closes the old virtual file, all of the files we are removing are closed, and all of the files that are still being recorded remain open. When the new virtual file is opened, all of the files that remained open resume and all of the new files start writing. After the encoder reports that the SWITCH is done, the old virtual file is immediately closed so that no further writing to the splitter can happen even if a media server connection was left open (it will throw an exception). This lets the network encoder decide when is a good time to transition, because it might have a different or better idea of when/where to do this than what SageTV might determine for it. In testing, adding and removing between just 10 files, 200ms was trimmed off just by leaving files open through transitions when appropriate, as opposed to closing everything and re-opening everything as appending for the new virtual file. When stopping, you just tell the encoder to stop, and when it returns, close the virtual file.
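The bookkeeping for which files to close versus keep open across a SWITCH reduces to a set difference, sketched here with hypothetical names:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of SWITCH bookkeeping: compare the file set of the old
// virtual file with the set for the new one, close only what was removed and
// leave continuing recordings open. Keeping continuing files open across the
// transition is where the observed ~200ms saving over close-everything /
// reopen-everything comes from.
public class SwitchPlan {
    public final Set<String> toClose = new HashSet<>();  // in old set only
    public final Set<String> toOpen = new HashSet<>();   // in new set only
    public final Set<String> keepOpen = new HashSet<>(); // in both sets

    public SwitchPlan(Set<String> oldFiles, Set<String> newFiles) {
        for (String f : oldFiles) {
            if (newFiles.contains(f)) keepOpen.add(f); else toClose.add(f);
        }
        for (String f : newFiles) {
            if (!oldFiles.contains(f)) toOpen.add(f);
        }
    }
}
```

So a transition from recordings {a, b, c} to {b, c, d} would close only a, open only d, and leave b and c writing uninterrupted.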
These virtual files are designed so that you can also stream anything to the media server with the same base name (the integer in hex) and any extension, and it will let you write that file and have it split the same as the one file that is actually expected, just with a different extension. You still need to write the A/V stream to the virtual file + extension it was expecting, but you can also use that virtual file name to stream, for example, subtitles that can't be muxed into the other stream. This feature was probably one of the bigger pains to design, but I felt it was a bad idea to do a fixed one-to-many mapping, because we would likely want this flexibility at some point anyway.
I now need to work on Seeker to get it to understand how this all works, and probably make a few small changes to Scheduler. I'll be using the remove back-to-back padding option as suggested. I'll figure the rest out as I go.