1. INTRODUCTION
For several years we have produced live events and live studio productions involving our students as part of our department's curriculum (Figure 1). As an educational institution, we are committed to remaining at the cutting edge of technology in our field, and we therefore have to adapt to new trends and technical possibilities in the world of live production. With today's digital technology, audio and video signals can be transmitted in real time over networks based on the Internet Protocol (IP), which allows us to realize high-quality productions even faster and more flexibly. Together with motivated and technically interested students, we have built the core of our control room (Figure 1), which can receive camera signals from the whole campus and, in principle, from anywhere in the world. The system is an essential part of the practical component of a lecture on media technology and makes it possible to teach our students the state of the art in live video broadcasting.

2. THEORETICAL CONSIDERATIONS
Technical Implementation
The proposed and developed system is a low-cost, self-implemented setup (Figure 2). Because of a limited budget, we built custom hardware: servers running a Unix-like operating system, equipped with DeckLink cards for Serial Digital Interface (SDI) input. Using FFmpeg, we convert the video signals of conventional cameras from SDI to Network Device Interface (NDI), an open protocol developed by NewTek. NDI is widely used in video-over-IP setups to "share video across a local area network" [1] and has become a common standard for live broadcast events; its flexibility and efficiency fit our demands well [2]. The network infrastructure we use is a standard gigabit network with multicast and unicast support. For the bidirectional connection to the decoding system, fiber-optic links of up to 40 Gbit/s are available.
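As an illustration, the encode step can be sketched as an FFmpeg invocation assembled in Python. The device name, the NDI source name, and the use of the libndi_newtek output (which requires a custom FFmpeg build and has been removed from official binaries) are assumptions for this sketch, not a record of our exact configuration:

```python
# Hypothetical sketch of the SDI-to-NDI encode command. The DeckLink device
# name and the libndi_newtek output are assumptions; libndi_newtek is only
# available in custom FFmpeg builds with NDI support compiled in.
def build_encode_cmd(device="DeckLink Mini Recorder", ndi_name="CAM1"):
    return [
        "ffmpeg",
        "-f", "decklink",       # capture from the DeckLink SDI card
        "-i", device,           # SDI input device (installation-specific)
        "-c:v", "rawvideo",     # keep video uncompressed for low latency
        "-f", "libndi_newtek",  # NDI output (custom FFmpeg build required)
        ndi_name,               # NDI source name announced on the LAN
    ]

print(" ".join(build_encode_cmd()))
```

One such process runs per camera, so each SDI feed appears on the network as its own named NDI source.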
Such bandwidth will become important in the near future if we want to stream, for example, in UHD/4K or even 8K quality. Encoding and decoding are implemented as mirrored systems, so the decoding system differs only in certain points, such as the decoding technique. Instead of FFmpeg, as on the encoding system, we use GStreamer with NDI support on the decoding server; in our tests, FFmpeg could not decode NDI streams seamlessly, i.e., without significant delays or frame drops. A main goal of the developed system is flexibility: it is completely modular and can be configured for different requirements and events. It is also easily scalable; for example, the processing power of the encoding and decoding servers can be increased to handle higher resolutions or to drive more cameras simultaneously. Because all communication is IP-based, the system is independent of its location, and compact, movable racks allow simple setup and teardown. To handle the data transferred across the different subnets, layer-3 gigabit switches with 10 Gbit/s uplinks are indispensable. The connection between the production site and the video control room can be established over fiber for long-distance, high-bandwidth applications, or over standard gigabit Ethernet cable for shorter distances and lower bandwidth. For special cases, and if the network devices provide small form-factor pluggable interfaces up to 10 Gbit/s (SFP+), 10 Gigabit Ethernet (10GE) can deliver higher bandwidth over short copper runs. For mobile productions, GSM/LTE links could additionally be used to transmit the data to the control room. In principle, any connection that supports IP can link the encoding and decoding systems.
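The decode side can be sketched in the same way as a gst-launch pipeline. The element names (ndisrc and ndisrcdemux, from the open-source gst-plugin-ndi project) are assumptions that depend on the installed plugin version, and the sink is a placeholder:

```python
# Hypothetical sketch of the NDI receive pipeline on the decoding server.
# ndisrc/ndisrcdemux come from the open-source gst-plugin-ndi; element names
# and properties vary between plugin versions, so treat this as a template.
def build_decode_cmd(ndi_name="CAM1"):
    pipeline = (
        f'ndisrc ndi-name="{ndi_name}" ! ndisrcdemux name=demux '
        "demux.video ! queue ! videoconvert ! autovideosink"
    )
    return ["gst-launch-1.0"] + pipeline.split()

print(" ".join(build_decode_cmd()))
```

In a real deployment the autovideosink placeholder would be replaced by an SDI or multiview output element; the point is that each received NDI source is just one more pipeline instance.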
Set-Up and Signal Flow
The signal flow is divided into audio, video, and control signals, all sent over the network layer. Video signals are converted from SDI to NDI. The camera signal is fed into the first patch panel in the encoder rack and then into the video router, which routes every signal to the correct destination. Each signal is then split. Because inputs five to eight of the vision mixer require HDMI, four SDI-to-HDMI converters are used instead of plain SDI splitters. All signals are fed into the vision mixer, which provides a preview at the recording location, and into the encoding server, which sends them over the network to the decoder server. There, all signals pass through a patch panel to the multiview and finally into the main vision mixer, where the live stream feed is mixed (Figure 3). Audio signals initially arrive as standard analog signals. They are split so that the live audio in the studio and the live stream audio can be mixed in parallel on two separate mixers. The stage box in the studio transmits all audio signals in both directions over the network via Dante, "a combination of software, hardware, and network protocols that deliver uncompressed, multi-channel, low-latency digital audio over a standard Ethernet network using Layer 3 IP packets" [3]. Additional audio sources are sent either via NDI together with the video or via Dante from the audio PC, where all audio routing also takes place. Currently we use a decoding system in our live control room and an encoding system at the production location, depending on the needs of the broadcast production. Given the available local network links, the setup is flexible, and the number of systems in operation can be adapted to the requirements: we can easily increase the number of studios or external cameras and mix them all together in one control room.

3. HANDS-ON BROADCAST
The system has been applied on several occasions in recent years.
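Before turning to the events themselves, the network sizing implied by the signal flow above can be checked with a rough link-budget estimate. All figures here are assumptions rather than measurements: Dante carries uncompressed PCM, so its payload rate is plain arithmetic, while roughly 125 Mbit/s is a commonly cited ballpark for one 1080p NDI feed:

```python
# Back-of-the-envelope link budget (all figures are rough assumptions).
# Dante payload: channels x sample rate x bit depth, uncompressed PCM.
def dante_payload_mbits(channels, sample_rate=48_000, bit_depth=24):
    return channels * sample_rate * bit_depth / 1e6

audio = dante_payload_mbits(64)  # 64 uncompressed channels
ndi_per_cam = 125                # Mbit/s, approx. one 1080p NDI feed
cams = 8
total = audio + cams * ndi_per_cam

print(f"audio: {audio:.1f} Mbit/s")  # 64 channels fit easily on gigabit
print(f"total: {total:.1f} Mbit/s")  # 8 cameras + audio exceed one GbE link
```

Even a full 64-channel Dante trunk stays below 100 Mbit/s, whereas a full complement of NDI camera feeds already exceeds a single gigabit link, which is why the layer-3 switches with 10 Gbit/s uplinks mentioned above are indispensable.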
For instance, the regional final of the annual First Lego League competition is organized at Offenburg University (Figures 5 and 7). It offers an excellent challenge for our students to be involved in a live event that is broadcast via a live stream. An overview of the students' work in the control room is given in Figure 8. The control room design used there was very complex, and almost all available capacity was exhausted: up to eight cameras and multiple return channels. The particular aim of the system is not only that each student understands the whole system, but that the students are able to operate it. This opens up the possibility to delve deeper into areas of interest to them and to work with the system generally in order to get to know each component. Finally, students should be able to develop their own ideas and implement them in the system. Such an educational event is a perfect opportunity for students to explore and learn the workflow of a fully IP-based live video and audio production. Participants learn to set up network devices, use them over the network, and apply their existing knowledge of video and audio in combination with the new IP-based standard (Figure 6).

Goals of Practical Work
The most important application of the developed system is as part of the courses and lectures on live video broadcasting. The attending students learn the workflow and how to realize live broadcast events and studio productions. After acquiring basic knowledge of the technical fundamentals of the system, they continue with its application. The students learn the organization and process of the work through their own involvement; finally, they have to adapt to each unique situation and apply their newly acquired knowledge. This "learning by doing" approach involves all participants in bringing their individual skills into the production.
As a result, all students gain good insight into the technology they use.

4. BROADCASTING OF LIVE STREAMS: A NEW PARADIGM FOR EDUCATION AND TRAINING IN PANDEMIC TIMES
Due to the COVID-19 pandemic, our university had to adapt to new guidelines and new forms of teaching, especially digital teaching. Live streaming is an ideal way of providing students with knowledge when lectures in the classroom are not possible. We could therefore use our knowledge and infrastructure to enable a better quality of learning and to achieve higher learning goals than other providers could offer in the same short time. By adapting to the specific learning subjects, we can address the highest levels of Bloom's taxonomy [5]-[7], such as synthesis and evaluation [8]. On the one hand, we achieve these goals for our own teaching subject; on the other hand, we offer a platform for teaching other topics such as media technology or science, technology, engineering, and mathematics (STEM) education [9].

5. CONCLUSION
The system has achieved its purpose of providing state-of-the-art technology that enables students to adapt to the future of broadcasting. By combining theoretical lectures with practical experience, we are able to teach a new level of media technology. Thanks to the modular design, we can continuously improve hardware and software to keep up with the latest technology. Students get a hands-on impression of how today's broadcast systems work. We conclude that we have established a system at our university that can be used for teaching, where students train hands-on with live production equipment.

REFERENCES
[1] "NDI (Network Device Interface) Overview,"
(2020). https://support.newtek.com/hc/en-us/articles/217662358-NDI-Network-Device-Interface-Overview
[2] Aleksandersen, D., "What is NDI® (Network Device Interface)?," (2017). https://newsandviews.dataton.com/what-is-ndi-network-device-interface
[3] "Dante (networking)," Wikipedia, (2020). https://en.wikipedia.org/wiki/Dante_(networking)
[4] Huck, T., "Audio over IP im Produktionskontext," (2020).
[5] Bloom, B. S., "Taxonomie von Lernzielen im kognitiven Bereich," Beltz Verlag, Weinheim und Basel (1972).
[6] Engelhart, M. D., Furst, E. J., Hill, W. H. and Bloom, B. S., "Taxonomie von Lernzielen im kognitiven Bereich," 5. Auflage, Beltz Verlag (2001).
[7] Krathwohl, D. R., Bloom, B. S. and Bertram, B. M., "Taxonomy of Educational Objectives, the Classification of Educational Goals. Handbook II: Affective Domain," David McKay Co. Inc., New York (1973).
[8] Anderson, L. W. and Krathwohl, D. R., "A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives," Addison-Wesley, New York (2001).
[9] Hallinen, J., "STEM Education Curriculum," Encyclopædia Britannica (2020).